Oct 13 10:41:04 np0005485008 kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 13 10:41:04 np0005485008 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 13 10:41:04 np0005485008 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 13 10:41:04 np0005485008 kernel: BIOS-provided physical RAM map:
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 13 10:41:04 np0005485008 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 13 10:41:04 np0005485008 kernel: NX (Execute Disable) protection: active
Oct 13 10:41:04 np0005485008 kernel: APIC: Static calls initialized
Oct 13 10:41:04 np0005485008 kernel: SMBIOS 2.8 present.
Oct 13 10:41:04 np0005485008 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 13 10:41:04 np0005485008 kernel: Hypervisor detected: KVM
Oct 13 10:41:04 np0005485008 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 10:41:04 np0005485008 kernel: kvm-clock: using sched offset of 4278027084 cycles
Oct 13 10:41:04 np0005485008 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 10:41:04 np0005485008 kernel: tsc: Detected 2800.000 MHz processor
Oct 13 10:41:04 np0005485008 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 13 10:41:04 np0005485008 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 13 10:41:04 np0005485008 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 13 10:41:04 np0005485008 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 13 10:41:04 np0005485008 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 13 10:41:04 np0005485008 kernel: Using GB pages for direct mapping
Oct 13 10:41:04 np0005485008 kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 13 10:41:04 np0005485008 kernel: ACPI: Early table checksum verification disabled
Oct 13 10:41:04 np0005485008 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 13 10:41:04 np0005485008 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 10:41:04 np0005485008 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 10:41:04 np0005485008 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 10:41:04 np0005485008 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 13 10:41:04 np0005485008 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 10:41:04 np0005485008 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 13 10:41:04 np0005485008 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 13 10:41:04 np0005485008 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 13 10:41:04 np0005485008 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 13 10:41:04 np0005485008 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 13 10:41:04 np0005485008 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 13 10:41:04 np0005485008 kernel: No NUMA configuration found
Oct 13 10:41:04 np0005485008 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 13 10:41:04 np0005485008 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 13 10:41:04 np0005485008 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 13 10:41:04 np0005485008 kernel: Zone ranges:
Oct 13 10:41:04 np0005485008 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 10:41:04 np0005485008 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 13 10:41:04 np0005485008 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 13 10:41:04 np0005485008 kernel:  Device   empty
Oct 13 10:41:04 np0005485008 kernel: Movable zone start for each node
Oct 13 10:41:04 np0005485008 kernel: Early memory node ranges
Oct 13 10:41:04 np0005485008 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 13 10:41:04 np0005485008 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 13 10:41:04 np0005485008 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 13 10:41:04 np0005485008 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 13 10:41:04 np0005485008 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 10:41:04 np0005485008 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 13 10:41:04 np0005485008 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 13 10:41:04 np0005485008 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 10:41:04 np0005485008 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 10:41:04 np0005485008 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 10:41:04 np0005485008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 10:41:04 np0005485008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 10:41:04 np0005485008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 10:41:04 np0005485008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 10:41:04 np0005485008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 10:41:04 np0005485008 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 10:41:04 np0005485008 kernel: TSC deadline timer available
Oct 13 10:41:04 np0005485008 kernel: CPU topo: Max. logical packages:   8
Oct 13 10:41:04 np0005485008 kernel: CPU topo: Max. logical dies:       8
Oct 13 10:41:04 np0005485008 kernel: CPU topo: Max. dies per package:   1
Oct 13 10:41:04 np0005485008 kernel: CPU topo: Max. threads per core:   1
Oct 13 10:41:04 np0005485008 kernel: CPU topo: Num. cores per package:     1
Oct 13 10:41:04 np0005485008 kernel: CPU topo: Num. threads per package:   1
Oct 13 10:41:04 np0005485008 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 13 10:41:04 np0005485008 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 13 10:41:04 np0005485008 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 13 10:41:04 np0005485008 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 13 10:41:04 np0005485008 kernel: Booting paravirtualized kernel on KVM
Oct 13 10:41:04 np0005485008 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 10:41:04 np0005485008 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 13 10:41:04 np0005485008 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 13 10:41:04 np0005485008 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 13 10:41:04 np0005485008 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 13 10:41:04 np0005485008 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 13 10:41:04 np0005485008 kernel: random: crng init done
Oct 13 10:41:04 np0005485008 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: Fallback order for Node 0: 0 
Oct 13 10:41:04 np0005485008 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 13 10:41:04 np0005485008 kernel: Policy zone: Normal
Oct 13 10:41:04 np0005485008 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 10:41:04 np0005485008 kernel: software IO TLB: area num 8.
Oct 13 10:41:04 np0005485008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 13 10:41:04 np0005485008 kernel: ftrace: allocating 49162 entries in 193 pages
Oct 13 10:41:04 np0005485008 kernel: ftrace: allocated 193 pages with 3 groups
Oct 13 10:41:04 np0005485008 kernel: Dynamic Preempt: voluntary
Oct 13 10:41:04 np0005485008 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 10:41:04 np0005485008 kernel: rcu: 	RCU event tracing is enabled.
Oct 13 10:41:04 np0005485008 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 13 10:41:04 np0005485008 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct 13 10:41:04 np0005485008 kernel: 	Rude variant of Tasks RCU enabled.
Oct 13 10:41:04 np0005485008 kernel: 	Tracing variant of Tasks RCU enabled.
Oct 13 10:41:04 np0005485008 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 10:41:04 np0005485008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 13 10:41:04 np0005485008 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 13 10:41:04 np0005485008 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 13 10:41:04 np0005485008 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 13 10:41:04 np0005485008 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 13 10:41:04 np0005485008 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 10:41:04 np0005485008 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 13 10:41:04 np0005485008 kernel: Console: colour VGA+ 80x25
Oct 13 10:41:04 np0005485008 kernel: printk: console [ttyS0] enabled
Oct 13 10:41:04 np0005485008 kernel: ACPI: Core revision 20230331
Oct 13 10:41:04 np0005485008 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 10:41:04 np0005485008 kernel: x2apic enabled
Oct 13 10:41:04 np0005485008 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 10:41:04 np0005485008 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 13 10:41:04 np0005485008 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 13 10:41:04 np0005485008 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 10:41:04 np0005485008 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 13 10:41:04 np0005485008 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 13 10:41:04 np0005485008 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 10:41:04 np0005485008 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 10:41:04 np0005485008 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 10:41:04 np0005485008 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 13 10:41:04 np0005485008 kernel: RETBleed: Mitigation: untrained return thunk
Oct 13 10:41:04 np0005485008 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 10:41:04 np0005485008 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 10:41:04 np0005485008 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 13 10:41:04 np0005485008 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 13 10:41:04 np0005485008 kernel: x86/bugs: return thunk changed
Oct 13 10:41:04 np0005485008 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 13 10:41:04 np0005485008 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 10:41:04 np0005485008 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 10:41:04 np0005485008 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 10:41:04 np0005485008 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 13 10:41:04 np0005485008 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 10:41:04 np0005485008 kernel: Freeing SMP alternatives memory: 40K
Oct 13 10:41:04 np0005485008 kernel: pid_max: default: 32768 minimum: 301
Oct 13 10:41:04 np0005485008 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 13 10:41:04 np0005485008 kernel: landlock: Up and running.
Oct 13 10:41:04 np0005485008 kernel: Yama: becoming mindful.
Oct 13 10:41:04 np0005485008 kernel: SELinux:  Initializing.
Oct 13 10:41:04 np0005485008 kernel: LSM support for eBPF active
Oct 13 10:41:04 np0005485008 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 13 10:41:04 np0005485008 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 13 10:41:04 np0005485008 kernel: ... version:                0
Oct 13 10:41:04 np0005485008 kernel: ... bit width:              48
Oct 13 10:41:04 np0005485008 kernel: ... generic registers:      6
Oct 13 10:41:04 np0005485008 kernel: ... value mask:             0000ffffffffffff
Oct 13 10:41:04 np0005485008 kernel: ... max period:             00007fffffffffff
Oct 13 10:41:04 np0005485008 kernel: ... fixed-purpose events:   0
Oct 13 10:41:04 np0005485008 kernel: ... event mask:             000000000000003f
Oct 13 10:41:04 np0005485008 kernel: signal: max sigframe size: 1776
Oct 13 10:41:04 np0005485008 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 10:41:04 np0005485008 kernel: rcu: 	Max phase no-delay instances is 400.
Oct 13 10:41:04 np0005485008 kernel: smp: Bringing up secondary CPUs ...
Oct 13 10:41:04 np0005485008 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 10:41:04 np0005485008 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 13 10:41:04 np0005485008 kernel: smp: Brought up 1 node, 8 CPUs
Oct 13 10:41:04 np0005485008 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 13 10:41:04 np0005485008 kernel: node 0 deferred pages initialised in 9ms
Oct 13 10:41:04 np0005485008 kernel: Memory: 7766144K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616212K reserved, 0K cma-reserved)
Oct 13 10:41:04 np0005485008 kernel: devtmpfs: initialized
Oct 13 10:41:04 np0005485008 kernel: x86/mm: Memory block size: 128MB
Oct 13 10:41:04 np0005485008 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 10:41:04 np0005485008 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 10:41:04 np0005485008 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 10:41:04 np0005485008 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 13 10:41:04 np0005485008 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 13 10:41:04 np0005485008 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 13 10:41:04 np0005485008 kernel: audit: initializing netlink subsys (disabled)
Oct 13 10:41:04 np0005485008 kernel: audit: type=2000 audit(1760366463.010:1): state=initialized audit_enabled=0 res=1
Oct 13 10:41:04 np0005485008 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 13 10:41:04 np0005485008 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 10:41:04 np0005485008 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 10:41:04 np0005485008 kernel: cpuidle: using governor menu
Oct 13 10:41:04 np0005485008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 10:41:04 np0005485008 kernel: PCI: Using configuration type 1 for base access
Oct 13 10:41:04 np0005485008 kernel: PCI: Using configuration type 1 for extended access
Oct 13 10:41:04 np0005485008 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 10:41:04 np0005485008 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 10:41:04 np0005485008 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 10:41:04 np0005485008 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 10:41:04 np0005485008 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 10:41:04 np0005485008 kernel: Demotion targets for Node 0: null
Oct 13 10:41:04 np0005485008 kernel: cryptd: max_cpu_qlen set to 1000
Oct 13 10:41:04 np0005485008 kernel: ACPI: Added _OSI(Module Device)
Oct 13 10:41:04 np0005485008 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 10:41:04 np0005485008 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 13 10:41:04 np0005485008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 10:41:04 np0005485008 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 10:41:04 np0005485008 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 13 10:41:04 np0005485008 kernel: ACPI: Interpreter enabled
Oct 13 10:41:04 np0005485008 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 13 10:41:04 np0005485008 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 10:41:04 np0005485008 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 10:41:04 np0005485008 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 10:41:04 np0005485008 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 13 10:41:04 np0005485008 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 10:41:04 np0005485008 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [3] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [4] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [5] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [6] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [7] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [8] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [9] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [10] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [11] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [12] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [13] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [14] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [15] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [16] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [17] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [18] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [19] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [20] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [21] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [22] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [23] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [24] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [25] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [26] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [27] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [28] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [29] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [30] registered
Oct 13 10:41:04 np0005485008 kernel: acpiphp: Slot [31] registered
Oct 13 10:41:04 np0005485008 kernel: PCI host bridge to bus 0000:00
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 13 10:41:04 np0005485008 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 10:41:04 np0005485008 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 10:41:04 np0005485008 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 10:41:04 np0005485008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 10:41:04 np0005485008 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 13 10:41:04 np0005485008 kernel: iommu: Default domain type: Translated
Oct 13 10:41:04 np0005485008 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 10:41:04 np0005485008 kernel: SCSI subsystem initialized
Oct 13 10:41:04 np0005485008 kernel: ACPI: bus type USB registered
Oct 13 10:41:04 np0005485008 kernel: usbcore: registered new interface driver usbfs
Oct 13 10:41:04 np0005485008 kernel: usbcore: registered new interface driver hub
Oct 13 10:41:04 np0005485008 kernel: usbcore: registered new device driver usb
Oct 13 10:41:04 np0005485008 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 13 10:41:04 np0005485008 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 13 10:41:04 np0005485008 kernel: PTP clock support registered
Oct 13 10:41:04 np0005485008 kernel: EDAC MC: Ver: 3.0.0
Oct 13 10:41:04 np0005485008 kernel: NetLabel: Initializing
Oct 13 10:41:04 np0005485008 kernel: NetLabel:  domain hash size = 128
Oct 13 10:41:04 np0005485008 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 13 10:41:04 np0005485008 kernel: NetLabel:  unlabeled traffic allowed by default
Oct 13 10:41:04 np0005485008 kernel: PCI: Using ACPI for IRQ routing
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 10:41:04 np0005485008 kernel: vgaarb: loaded
Oct 13 10:41:04 np0005485008 kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 10:41:04 np0005485008 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 10:41:04 np0005485008 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 10:41:04 np0005485008 kernel: pnp: PnP ACPI init
Oct 13 10:41:04 np0005485008 kernel: pnp: PnP ACPI: found 5 devices
Oct 13 10:41:04 np0005485008 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 10:41:04 np0005485008 kernel: NET: Registered PF_INET protocol family
Oct 13 10:41:04 np0005485008 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 13 10:41:04 np0005485008 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 13 10:41:04 np0005485008 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 10:41:04 np0005485008 kernel: NET: Registered PF_XDP protocol family
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 13 10:41:04 np0005485008 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 13 10:41:04 np0005485008 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 13 10:41:04 np0005485008 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 72061 usecs
Oct 13 10:41:04 np0005485008 kernel: PCI: CLS 0 bytes, default 64
Oct 13 10:41:04 np0005485008 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 13 10:41:04 np0005485008 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 13 10:41:04 np0005485008 kernel: Trying to unpack rootfs image as initramfs...
Oct 13 10:41:04 np0005485008 kernel: ACPI: bus type thunderbolt registered
Oct 13 10:41:04 np0005485008 kernel: Initialise system trusted keyrings
Oct 13 10:41:04 np0005485008 kernel: Key type blacklist registered
Oct 13 10:41:04 np0005485008 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 13 10:41:04 np0005485008 kernel: zbud: loaded
Oct 13 10:41:04 np0005485008 kernel: integrity: Platform Keyring initialized
Oct 13 10:41:04 np0005485008 kernel: integrity: Machine keyring initialized
Oct 13 10:41:04 np0005485008 kernel: Freeing initrd memory: 85808K
Oct 13 10:41:04 np0005485008 kernel: NET: Registered PF_ALG protocol family
Oct 13 10:41:04 np0005485008 kernel: xor: automatically using best checksumming function   avx       
Oct 13 10:41:04 np0005485008 kernel: Key type asymmetric registered
Oct 13 10:41:04 np0005485008 kernel: Asymmetric key parser 'x509' registered
Oct 13 10:41:04 np0005485008 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 13 10:41:04 np0005485008 kernel: io scheduler mq-deadline registered
Oct 13 10:41:04 np0005485008 kernel: io scheduler kyber registered
Oct 13 10:41:04 np0005485008 kernel: io scheduler bfq registered
Oct 13 10:41:04 np0005485008 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 13 10:41:04 np0005485008 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 13 10:41:04 np0005485008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 13 10:41:04 np0005485008 kernel: ACPI: button: Power Button [PWRF]
Oct 13 10:41:04 np0005485008 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 13 10:41:04 np0005485008 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 13 10:41:04 np0005485008 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 13 10:41:04 np0005485008 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 10:41:04 np0005485008 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 10:41:04 np0005485008 kernel: Non-volatile memory driver v1.3
Oct 13 10:41:04 np0005485008 kernel: rdac: device handler registered
Oct 13 10:41:04 np0005485008 kernel: hp_sw: device handler registered
Oct 13 10:41:04 np0005485008 kernel: emc: device handler registered
Oct 13 10:41:04 np0005485008 kernel: alua: device handler registered
Oct 13 10:41:04 np0005485008 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 13 10:41:04 np0005485008 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 13 10:41:04 np0005485008 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 13 10:41:04 np0005485008 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 13 10:41:04 np0005485008 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 13 10:41:04 np0005485008 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 13 10:41:04 np0005485008 kernel: usb usb1: Product: UHCI Host Controller
Oct 13 10:41:04 np0005485008 kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 13 10:41:04 np0005485008 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 13 10:41:04 np0005485008 kernel: hub 1-0:1.0: USB hub found
Oct 13 10:41:04 np0005485008 kernel: hub 1-0:1.0: 2 ports detected
Oct 13 10:41:04 np0005485008 kernel: usbcore: registered new interface driver usbserial_generic
Oct 13 10:41:04 np0005485008 kernel: usbserial: USB Serial support registered for generic
Oct 13 10:41:04 np0005485008 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 10:41:04 np0005485008 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 10:41:04 np0005485008 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 10:41:04 np0005485008 kernel: mousedev: PS/2 mouse device common for all mice
Oct 13 10:41:04 np0005485008 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 13 10:41:04 np0005485008 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 13 10:41:04 np0005485008 kernel: rtc_cmos 00:04: registered as rtc0
Oct 13 10:41:04 np0005485008 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T14:41:03 UTC (1760366463)
Oct 13 10:41:04 np0005485008 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 13 10:41:04 np0005485008 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 13 10:41:04 np0005485008 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 13 10:41:04 np0005485008 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 13 10:41:04 np0005485008 kernel: usbcore: registered new interface driver usbhid
Oct 13 10:41:04 np0005485008 kernel: usbhid: USB HID core driver
Oct 13 10:41:04 np0005485008 kernel: drop_monitor: Initializing network drop monitor service
Oct 13 10:41:04 np0005485008 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 13 10:41:04 np0005485008 kernel: Initializing XFRM netlink socket
Oct 13 10:41:04 np0005485008 kernel: NET: Registered PF_INET6 protocol family
Oct 13 10:41:04 np0005485008 kernel: Segment Routing with IPv6
Oct 13 10:41:04 np0005485008 kernel: NET: Registered PF_PACKET protocol family
Oct 13 10:41:04 np0005485008 kernel: mpls_gso: MPLS GSO support
Oct 13 10:41:04 np0005485008 kernel: IPI shorthand broadcast: enabled
Oct 13 10:41:04 np0005485008 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 13 10:41:04 np0005485008 kernel: AES CTR mode by8 optimization enabled
Oct 13 10:41:04 np0005485008 kernel: sched_clock: Marking stable (1268005813, 145379638)->(1547031378, -133645927)
Oct 13 10:41:04 np0005485008 kernel: registered taskstats version 1
Oct 13 10:41:04 np0005485008 kernel: Loading compiled-in X.509 certificates
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 13 10:41:04 np0005485008 kernel: Demotion targets for Node 0: null
Oct 13 10:41:04 np0005485008 kernel: page_owner is disabled
Oct 13 10:41:04 np0005485008 kernel: Key type .fscrypt registered
Oct 13 10:41:04 np0005485008 kernel: Key type fscrypt-provisioning registered
Oct 13 10:41:04 np0005485008 kernel: Key type big_key registered
Oct 13 10:41:04 np0005485008 kernel: Key type encrypted registered
Oct 13 10:41:04 np0005485008 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 10:41:04 np0005485008 kernel: Loading compiled-in module X.509 certificates
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 13 10:41:04 np0005485008 kernel: ima: Allocated hash algorithm: sha256
Oct 13 10:41:04 np0005485008 kernel: ima: No architecture policies found
Oct 13 10:41:04 np0005485008 kernel: evm: Initialising EVM extended attributes:
Oct 13 10:41:04 np0005485008 kernel: evm: security.selinux
Oct 13 10:41:04 np0005485008 kernel: evm: security.SMACK64 (disabled)
Oct 13 10:41:04 np0005485008 kernel: evm: security.SMACK64EXEC (disabled)
Oct 13 10:41:04 np0005485008 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 13 10:41:04 np0005485008 kernel: evm: security.SMACK64MMAP (disabled)
Oct 13 10:41:04 np0005485008 kernel: evm: security.apparmor (disabled)
Oct 13 10:41:04 np0005485008 kernel: evm: security.ima
Oct 13 10:41:04 np0005485008 kernel: evm: security.capability
Oct 13 10:41:04 np0005485008 kernel: evm: HMAC attrs: 0x1
Oct 13 10:41:04 np0005485008 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 13 10:41:04 np0005485008 kernel: Running certificate verification RSA selftest
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 13 10:41:04 np0005485008 kernel: Running certificate verification ECDSA selftest
Oct 13 10:41:04 np0005485008 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 13 10:41:04 np0005485008 kernel: clk: Disabling unused clocks
Oct 13 10:41:04 np0005485008 kernel: Freeing unused decrypted memory: 2028K
Oct 13 10:41:04 np0005485008 kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 13 10:41:04 np0005485008 kernel: Write protecting the kernel read-only data: 30720k
Oct 13 10:41:04 np0005485008 kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 13 10:41:04 np0005485008 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 13 10:41:04 np0005485008 kernel: Run /init as init process
Oct 13 10:41:04 np0005485008 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 13 10:41:04 np0005485008 systemd: Detected virtualization kvm.
Oct 13 10:41:04 np0005485008 systemd: Detected architecture x86-64.
Oct 13 10:41:04 np0005485008 systemd: Running in initrd.
Oct 13 10:41:04 np0005485008 systemd: No hostname configured, using default hostname.
Oct 13 10:41:04 np0005485008 systemd: Hostname set to <localhost>.
Oct 13 10:41:04 np0005485008 systemd: Initializing machine ID from VM UUID.
Oct 13 10:41:04 np0005485008 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 13 10:41:04 np0005485008 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 13 10:41:04 np0005485008 kernel: usb 1-1: Product: QEMU USB Tablet
Oct 13 10:41:04 np0005485008 kernel: usb 1-1: Manufacturer: QEMU
Oct 13 10:41:04 np0005485008 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 13 10:41:04 np0005485008 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 13 10:41:04 np0005485008 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 13 10:41:04 np0005485008 systemd: Queued start job for default target Initrd Default Target.
Oct 13 10:41:04 np0005485008 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 13 10:41:04 np0005485008 systemd: Reached target Local Encrypted Volumes.
Oct 13 10:41:04 np0005485008 systemd: Reached target Initrd /usr File System.
Oct 13 10:41:04 np0005485008 systemd: Reached target Local File Systems.
Oct 13 10:41:04 np0005485008 systemd: Reached target Path Units.
Oct 13 10:41:04 np0005485008 systemd: Reached target Slice Units.
Oct 13 10:41:04 np0005485008 systemd: Reached target Swaps.
Oct 13 10:41:04 np0005485008 systemd: Reached target Timer Units.
Oct 13 10:41:04 np0005485008 systemd: Listening on D-Bus System Message Bus Socket.
Oct 13 10:41:04 np0005485008 systemd: Listening on Journal Socket (/dev/log).
Oct 13 10:41:04 np0005485008 systemd: Listening on Journal Socket.
Oct 13 10:41:04 np0005485008 systemd: Listening on udev Control Socket.
Oct 13 10:41:04 np0005485008 systemd: Listening on udev Kernel Socket.
Oct 13 10:41:04 np0005485008 systemd: Reached target Socket Units.
Oct 13 10:41:04 np0005485008 systemd: Starting Create List of Static Device Nodes...
Oct 13 10:41:04 np0005485008 systemd: Starting Journal Service...
Oct 13 10:41:04 np0005485008 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 13 10:41:04 np0005485008 systemd: Starting Apply Kernel Variables...
Oct 13 10:41:04 np0005485008 systemd: Starting Create System Users...
Oct 13 10:41:04 np0005485008 systemd: Starting Setup Virtual Console...
Oct 13 10:41:04 np0005485008 systemd: Finished Create List of Static Device Nodes.
Oct 13 10:41:04 np0005485008 systemd: Finished Apply Kernel Variables.
Oct 13 10:41:04 np0005485008 systemd: Finished Create System Users.
Oct 13 10:41:04 np0005485008 systemd-journald[305]: Journal started
Oct 13 10:41:04 np0005485008 systemd-journald[305]: Runtime Journal (/run/log/journal/48631c7f32de410bace10785fa2d7327) is 8.0M, max 153.6M, 145.6M free.
Oct 13 10:41:04 np0005485008 systemd-sysusers[308]: Creating group 'users' with GID 100.
Oct 13 10:41:04 np0005485008 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Oct 13 10:41:04 np0005485008 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 13 10:41:04 np0005485008 systemd: Started Journal Service.
Oct 13 10:41:04 np0005485008 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 13 10:41:04 np0005485008 systemd[1]: Starting Create Volatile Files and Directories...
Oct 13 10:41:04 np0005485008 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 13 10:41:04 np0005485008 systemd[1]: Finished Create Volatile Files and Directories.
Oct 13 10:41:04 np0005485008 systemd[1]: Finished Setup Virtual Console.
Oct 13 10:41:04 np0005485008 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 13 10:41:04 np0005485008 systemd[1]: Starting dracut cmdline hook...
Oct 13 10:41:04 np0005485008 dracut-cmdline[323]: dracut-9 dracut-057-102.git20250818.el9
Oct 13 10:41:04 np0005485008 dracut-cmdline[323]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 13 10:41:04 np0005485008 systemd[1]: Finished dracut cmdline hook.
Oct 13 10:41:04 np0005485008 systemd[1]: Starting dracut pre-udev hook...
Oct 13 10:41:04 np0005485008 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 10:41:04 np0005485008 kernel: device-mapper: uevent: version 1.0.3
Oct 13 10:41:04 np0005485008 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 13 10:41:04 np0005485008 kernel: RPC: Registered named UNIX socket transport module.
Oct 13 10:41:04 np0005485008 kernel: RPC: Registered udp transport module.
Oct 13 10:41:04 np0005485008 kernel: RPC: Registered tcp transport module.
Oct 13 10:41:04 np0005485008 kernel: RPC: Registered tcp-with-tls transport module.
Oct 13 10:41:04 np0005485008 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 13 10:41:04 np0005485008 rpc.statd[442]: Version 2.5.4 starting
Oct 13 10:41:04 np0005485008 rpc.statd[442]: Initializing NSM state
Oct 13 10:41:04 np0005485008 rpc.idmapd[447]: Setting log level to 0
Oct 13 10:41:04 np0005485008 systemd[1]: Finished dracut pre-udev hook.
Oct 13 10:41:04 np0005485008 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 13 10:41:04 np0005485008 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Oct 13 10:41:04 np0005485008 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 13 10:41:04 np0005485008 systemd[1]: Starting dracut pre-trigger hook...
Oct 13 10:41:04 np0005485008 systemd[1]: Finished dracut pre-trigger hook.
Oct 13 10:41:04 np0005485008 systemd[1]: Starting Coldplug All udev Devices...
Oct 13 10:41:05 np0005485008 systemd[1]: Created slice Slice /system/modprobe.
Oct 13 10:41:05 np0005485008 systemd[1]: Starting Load Kernel Module configfs...
Oct 13 10:41:05 np0005485008 systemd[1]: Finished Coldplug All udev Devices.
Oct 13 10:41:05 np0005485008 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 10:41:05 np0005485008 systemd[1]: Finished Load Kernel Module configfs.
Oct 13 10:41:05 np0005485008 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 13 10:41:05 np0005485008 systemd[1]: Reached target Network.
Oct 13 10:41:05 np0005485008 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 13 10:41:05 np0005485008 systemd[1]: Starting dracut initqueue hook...
Oct 13 10:41:05 np0005485008 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 13 10:41:05 np0005485008 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 13 10:41:05 np0005485008 kernel: vda: vda1
Oct 13 10:41:05 np0005485008 systemd-udevd[465]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 10:41:05 np0005485008 kernel: scsi host0: ata_piix
Oct 13 10:41:05 np0005485008 kernel: scsi host1: ata_piix
Oct 13 10:41:05 np0005485008 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 13 10:41:05 np0005485008 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 13 10:41:05 np0005485008 systemd[1]: Mounting Kernel Configuration File System...
Oct 13 10:41:05 np0005485008 systemd[1]: Mounted Kernel Configuration File System.
Oct 13 10:41:05 np0005485008 systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 13 10:41:05 np0005485008 systemd[1]: Reached target Initrd Root Device.
Oct 13 10:41:05 np0005485008 systemd[1]: Reached target System Initialization.
Oct 13 10:41:05 np0005485008 systemd[1]: Reached target Basic System.
Oct 13 10:41:05 np0005485008 kernel: ata1: found unknown device (class 0)
Oct 13 10:41:05 np0005485008 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 13 10:41:05 np0005485008 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 13 10:41:05 np0005485008 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 13 10:41:05 np0005485008 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 13 10:41:05 np0005485008 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 10:41:05 np0005485008 systemd[1]: Finished dracut initqueue hook.
Oct 13 10:41:05 np0005485008 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 13 10:41:05 np0005485008 systemd[1]: Reached target Remote Encrypted Volumes.
Oct 13 10:41:05 np0005485008 systemd[1]: Reached target Remote File Systems.
Oct 13 10:41:05 np0005485008 systemd[1]: Starting dracut pre-mount hook...
Oct 13 10:41:05 np0005485008 systemd[1]: Finished dracut pre-mount hook.
Oct 13 10:41:05 np0005485008 systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 13 10:41:05 np0005485008 systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Oct 13 10:41:05 np0005485008 systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 13 10:41:05 np0005485008 systemd[1]: Mounting /sysroot...
Oct 13 10:41:06 np0005485008 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 13 10:41:06 np0005485008 kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 13 10:41:06 np0005485008 kernel: XFS (vda1): Ending clean mount
Oct 13 10:41:06 np0005485008 systemd[1]: Mounted /sysroot.
Oct 13 10:41:06 np0005485008 systemd[1]: Reached target Initrd Root File System.
Oct 13 10:41:06 np0005485008 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 13 10:41:06 np0005485008 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 13 10:41:06 np0005485008 systemd[1]: Reached target Initrd File Systems.
Oct 13 10:41:06 np0005485008 systemd[1]: Reached target Initrd Default Target.
Oct 13 10:41:06 np0005485008 systemd[1]: Starting dracut mount hook...
Oct 13 10:41:06 np0005485008 systemd[1]: Finished dracut mount hook.
Oct 13 10:41:06 np0005485008 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 13 10:41:06 np0005485008 rpc.idmapd[447]: exiting on signal 15
Oct 13 10:41:06 np0005485008 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 13 10:41:06 np0005485008 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Network.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Timer Units.
Oct 13 10:41:06 np0005485008 systemd[1]: dbus.socket: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 13 10:41:06 np0005485008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Initrd Default Target.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Basic System.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Initrd Root Device.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Initrd /usr File System.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Path Units.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Remote File Systems.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Slice Units.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Socket Units.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target System Initialization.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Local File Systems.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Swaps.
Oct 13 10:41:06 np0005485008 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped dracut mount hook.
Oct 13 10:41:06 np0005485008 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped dracut pre-mount hook.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped target Local Encrypted Volumes.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 13 10:41:06 np0005485008 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped dracut initqueue hook.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Apply Kernel Variables.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Create Volatile Files and Directories.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Coldplug All udev Devices.
Oct 13 10:41:06 np0005485008 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped dracut pre-trigger hook.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Setup Virtual Console.
Oct 13 10:41:06 np0005485008 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Closed udev Control Socket.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Closed udev Kernel Socket.
Oct 13 10:41:06 np0005485008 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped dracut pre-udev hook.
Oct 13 10:41:06 np0005485008 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped dracut cmdline hook.
Oct 13 10:41:06 np0005485008 systemd[1]: Starting Cleanup udev Database...
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 13 10:41:06 np0005485008 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Create List of Static Device Nodes.
Oct 13 10:41:06 np0005485008 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Stopped Create System Users.
Oct 13 10:41:06 np0005485008 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 10:41:06 np0005485008 systemd[1]: Finished Cleanup udev Database.
Oct 13 10:41:06 np0005485008 systemd[1]: Reached target Switch Root.
Oct 13 10:41:06 np0005485008 systemd[1]: Starting Switch Root...
Oct 13 10:41:06 np0005485008 systemd[1]: Switching root.
Oct 13 10:41:06 np0005485008 systemd-journald[305]: Journal stopped
Oct 13 10:41:07 np0005485008 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct 13 10:41:07 np0005485008 kernel: audit: type=1404 audit(1760366466.737:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 13 10:41:07 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 10:41:07 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 10:41:07 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 10:41:07 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 10:41:07 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 10:41:07 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 10:41:07 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 10:41:07 np0005485008 kernel: audit: type=1403 audit(1760366466.914:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 10:41:07 np0005485008 systemd: Successfully loaded SELinux policy in 181.755ms.
Oct 13 10:41:07 np0005485008 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.352ms.
Oct 13 10:41:07 np0005485008 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 13 10:41:07 np0005485008 systemd: Detected virtualization kvm.
Oct 13 10:41:07 np0005485008 systemd: Detected architecture x86-64.
Oct 13 10:41:07 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 10:41:07 np0005485008 systemd: initrd-switch-root.service: Deactivated successfully.
Oct 13 10:41:07 np0005485008 systemd: Stopped Switch Root.
Oct 13 10:41:07 np0005485008 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 10:41:07 np0005485008 systemd: Created slice Slice /system/getty.
Oct 13 10:41:07 np0005485008 systemd: Created slice Slice /system/serial-getty.
Oct 13 10:41:07 np0005485008 systemd: Created slice Slice /system/sshd-keygen.
Oct 13 10:41:07 np0005485008 systemd: Created slice User and Session Slice.
Oct 13 10:41:07 np0005485008 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct 13 10:41:07 np0005485008 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct 13 10:41:07 np0005485008 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 13 10:41:07 np0005485008 systemd: Reached target Local Encrypted Volumes.
Oct 13 10:41:07 np0005485008 systemd: Stopped target Switch Root.
Oct 13 10:41:07 np0005485008 systemd: Stopped target Initrd File Systems.
Oct 13 10:41:07 np0005485008 systemd: Stopped target Initrd Root File System.
Oct 13 10:41:07 np0005485008 systemd: Reached target Local Integrity Protected Volumes.
Oct 13 10:41:07 np0005485008 systemd: Reached target Path Units.
Oct 13 10:41:07 np0005485008 systemd: Reached target rpc_pipefs.target.
Oct 13 10:41:07 np0005485008 systemd: Reached target Slice Units.
Oct 13 10:41:07 np0005485008 systemd: Reached target Swaps.
Oct 13 10:41:07 np0005485008 systemd: Reached target Local Verity Protected Volumes.
Oct 13 10:41:07 np0005485008 systemd: Listening on RPCbind Server Activation Socket.
Oct 13 10:41:07 np0005485008 systemd: Reached target RPC Port Mapper.
Oct 13 10:41:07 np0005485008 systemd: Listening on Process Core Dump Socket.
Oct 13 10:41:07 np0005485008 systemd: Listening on initctl Compatibility Named Pipe.
Oct 13 10:41:07 np0005485008 systemd: Listening on udev Control Socket.
Oct 13 10:41:07 np0005485008 systemd: Listening on udev Kernel Socket.
Oct 13 10:41:07 np0005485008 systemd: Mounting Huge Pages File System...
Oct 13 10:41:07 np0005485008 systemd: Mounting POSIX Message Queue File System...
Oct 13 10:41:07 np0005485008 systemd: Mounting Kernel Debug File System...
Oct 13 10:41:07 np0005485008 systemd: Mounting Kernel Trace File System...
Oct 13 10:41:07 np0005485008 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 13 10:41:07 np0005485008 systemd: Starting Create List of Static Device Nodes...
Oct 13 10:41:07 np0005485008 systemd: Starting Load Kernel Module configfs...
Oct 13 10:41:07 np0005485008 systemd: Starting Load Kernel Module drm...
Oct 13 10:41:07 np0005485008 systemd: Starting Load Kernel Module efi_pstore...
Oct 13 10:41:07 np0005485008 systemd: Starting Load Kernel Module fuse...
Oct 13 10:41:07 np0005485008 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 13 10:41:07 np0005485008 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct 13 10:41:07 np0005485008 systemd: Stopped File System Check on Root Device.
Oct 13 10:41:07 np0005485008 systemd: Stopped Journal Service.
Oct 13 10:41:07 np0005485008 kernel: fuse: init (API version 7.37)
Oct 13 10:41:07 np0005485008 systemd: Starting Journal Service...
Oct 13 10:41:07 np0005485008 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 13 10:41:07 np0005485008 systemd: Starting Generate network units from Kernel command line...
Oct 13 10:41:07 np0005485008 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 13 10:41:07 np0005485008 systemd: Starting Remount Root and Kernel File Systems...
Oct 13 10:41:07 np0005485008 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 13 10:41:07 np0005485008 systemd: Starting Apply Kernel Variables...
Oct 13 10:41:07 np0005485008 systemd: Starting Coldplug All udev Devices...
Oct 13 10:41:07 np0005485008 kernel: ACPI: bus type drm_connector registered
Oct 13 10:41:07 np0005485008 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 13 10:41:07 np0005485008 systemd: Mounted Huge Pages File System.
Oct 13 10:41:07 np0005485008 systemd: Mounted POSIX Message Queue File System.
Oct 13 10:41:07 np0005485008 systemd: Mounted Kernel Debug File System.
Oct 13 10:41:07 np0005485008 systemd: Mounted Kernel Trace File System.
Oct 13 10:41:07 np0005485008 systemd: Finished Create List of Static Device Nodes.
Oct 13 10:41:07 np0005485008 systemd-journald[677]: Journal started
Oct 13 10:41:07 np0005485008 systemd-journald[677]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 13 10:41:07 np0005485008 systemd[1]: Queued start job for default target Multi-User System.
Oct 13 10:41:07 np0005485008 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 10:41:07 np0005485008 systemd: Started Journal Service.
Oct 13 10:41:07 np0005485008 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Load Kernel Module configfs.
Oct 13 10:41:07 np0005485008 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Load Kernel Module drm.
Oct 13 10:41:07 np0005485008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 13 10:41:07 np0005485008 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Load Kernel Module fuse.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Generate network units from Kernel command line.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Apply Kernel Variables.
Oct 13 10:41:07 np0005485008 systemd[1]: Mounting FUSE Control File System...
Oct 13 10:41:07 np0005485008 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Rebuild Hardware Database...
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 13 10:41:07 np0005485008 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Load/Save OS Random Seed...
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Create System Users...
Oct 13 10:41:07 np0005485008 systemd-journald[677]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 13 10:41:07 np0005485008 systemd-journald[677]: Received client request to flush runtime journal.
Oct 13 10:41:07 np0005485008 systemd[1]: Mounted FUSE Control File System.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Load/Save OS Random Seed.
Oct 13 10:41:07 np0005485008 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Create System Users.
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Coldplug All udev Devices.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 13 10:41:07 np0005485008 systemd[1]: Reached target Preparation for Local File Systems.
Oct 13 10:41:07 np0005485008 systemd[1]: Reached target Local File Systems.
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 13 10:41:07 np0005485008 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 13 10:41:07 np0005485008 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 13 10:41:07 np0005485008 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Automatic Boot Loader Update...
Oct 13 10:41:07 np0005485008 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Create Volatile Files and Directories...
Oct 13 10:41:07 np0005485008 bootctl[694]: Couldn't find EFI system partition, skipping.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Automatic Boot Loader Update.
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Create Volatile Files and Directories.
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Security Auditing Service...
Oct 13 10:41:07 np0005485008 systemd[1]: Starting RPC Bind...
Oct 13 10:41:07 np0005485008 systemd[1]: Starting Rebuild Journal Catalog...
Oct 13 10:41:07 np0005485008 auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 13 10:41:07 np0005485008 auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 13 10:41:07 np0005485008 systemd[1]: Finished Rebuild Journal Catalog.
Oct 13 10:41:08 np0005485008 systemd[1]: Started RPC Bind.
Oct 13 10:41:08 np0005485008 augenrules[705]: /sbin/augenrules: No change
Oct 13 10:41:08 np0005485008 augenrules[720]: No rules
Oct 13 10:41:08 np0005485008 augenrules[720]: enabled 1
Oct 13 10:41:08 np0005485008 augenrules[720]: failure 1
Oct 13 10:41:08 np0005485008 augenrules[720]: pid 700
Oct 13 10:41:08 np0005485008 augenrules[720]: rate_limit 0
Oct 13 10:41:08 np0005485008 augenrules[720]: backlog_limit 8192
Oct 13 10:41:08 np0005485008 augenrules[720]: lost 0
Oct 13 10:41:08 np0005485008 augenrules[720]: backlog 0
Oct 13 10:41:08 np0005485008 augenrules[720]: backlog_wait_time 60000
Oct 13 10:41:08 np0005485008 augenrules[720]: backlog_wait_time_actual 0
Oct 13 10:41:08 np0005485008 systemd[1]: Started Security Auditing Service.
Oct 13 10:41:08 np0005485008 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 13 10:41:08 np0005485008 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 13 10:41:08 np0005485008 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 13 10:41:08 np0005485008 systemd[1]: Finished Rebuild Hardware Database.
Oct 13 10:41:08 np0005485008 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 13 10:41:08 np0005485008 systemd[1]: Starting Update is Completed...
Oct 13 10:41:08 np0005485008 systemd[1]: Finished Update is Completed.
Oct 13 10:41:08 np0005485008 systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Oct 13 10:41:08 np0005485008 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 13 10:41:08 np0005485008 systemd[1]: Reached target System Initialization.
Oct 13 10:41:08 np0005485008 systemd[1]: Started dnf makecache --timer.
Oct 13 10:41:08 np0005485008 systemd[1]: Started Daily rotation of log files.
Oct 13 10:41:08 np0005485008 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 13 10:41:08 np0005485008 systemd[1]: Reached target Timer Units.
Oct 13 10:41:08 np0005485008 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 13 10:41:08 np0005485008 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 13 10:41:08 np0005485008 systemd[1]: Reached target Socket Units.
Oct 13 10:41:08 np0005485008 systemd[1]: Starting D-Bus System Message Bus...
Oct 13 10:41:08 np0005485008 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 13 10:41:08 np0005485008 systemd[1]: Starting Load Kernel Module configfs...
Oct 13 10:41:08 np0005485008 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 13 10:41:08 np0005485008 systemd[1]: Finished Load Kernel Module configfs.
Oct 13 10:41:08 np0005485008 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 13 10:41:08 np0005485008 systemd-udevd[739]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 10:41:08 np0005485008 systemd[1]: Started D-Bus System Message Bus.
Oct 13 10:41:08 np0005485008 systemd[1]: Reached target Basic System.
Oct 13 10:41:08 np0005485008 dbus-broker-lau[764]: Ready
Oct 13 10:41:08 np0005485008 systemd[1]: Starting NTP client/server...
Oct 13 10:41:08 np0005485008 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 13 10:41:08 np0005485008 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 13 10:41:08 np0005485008 systemd[1]: Starting IPv4 firewall with iptables...
Oct 13 10:41:08 np0005485008 systemd[1]: Started irqbalance daemon.
Oct 13 10:41:08 np0005485008 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 13 10:41:08 np0005485008 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 10:41:08 np0005485008 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 10:41:08 np0005485008 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 10:41:08 np0005485008 systemd[1]: Reached target sshd-keygen.target.
Oct 13 10:41:08 np0005485008 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 13 10:41:08 np0005485008 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 13 10:41:08 np0005485008 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 13 10:41:08 np0005485008 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 13 10:41:08 np0005485008 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 13 10:41:08 np0005485008 systemd[1]: Reached target User and Group Name Lookups.
Oct 13 10:41:08 np0005485008 systemd[1]: Starting User Login Management...
Oct 13 10:41:08 np0005485008 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 13 10:41:08 np0005485008 chronyd[797]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 13 10:41:08 np0005485008 chronyd[797]: Loaded 0 symmetric keys
Oct 13 10:41:08 np0005485008 chronyd[797]: Using right/UTC timezone to obtain leap second data
Oct 13 10:41:08 np0005485008 systemd[1]: Started NTP client/server.
Oct 13 10:41:08 np0005485008 chronyd[797]: Loaded seccomp filter (level 2)
Oct 13 10:41:08 np0005485008 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 13 10:41:08 np0005485008 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 13 10:41:08 np0005485008 systemd-logind[784]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 13 10:41:08 np0005485008 systemd-logind[784]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 13 10:41:08 np0005485008 systemd-logind[784]: New seat seat0.
Oct 13 10:41:08 np0005485008 systemd[1]: Started User Login Management.
Oct 13 10:41:08 np0005485008 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 13 10:41:08 np0005485008 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 13 10:41:08 np0005485008 kernel: Console: switching to colour dummy device 80x25
Oct 13 10:41:08 np0005485008 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 13 10:41:08 np0005485008 kernel: [drm] features: -context_init
Oct 13 10:41:08 np0005485008 kernel: [drm] number of scanouts: 1
Oct 13 10:41:08 np0005485008 kernel: [drm] number of cap sets: 0
Oct 13 10:41:08 np0005485008 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 13 10:41:08 np0005485008 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 13 10:41:08 np0005485008 kernel: Console: switching to colour frame buffer device 128x48
Oct 13 10:41:08 np0005485008 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 13 10:41:08 np0005485008 kernel: kvm_amd: TSC scaling supported
Oct 13 10:41:08 np0005485008 kernel: kvm_amd: Nested Virtualization enabled
Oct 13 10:41:08 np0005485008 kernel: kvm_amd: Nested Paging enabled
Oct 13 10:41:08 np0005485008 kernel: kvm_amd: LBR virtualization supported
Oct 13 10:41:08 np0005485008 iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Oct 13 10:41:08 np0005485008 systemd[1]: Finished IPv4 firewall with iptables.
Oct 13 10:41:09 np0005485008 cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 13 Oct 2025 14:41:09 +0000. Up 6.89 seconds.
Oct 13 10:41:09 np0005485008 systemd[1]: run-cloud\x2dinit-tmp-tmp4nuqca6m.mount: Deactivated successfully.
Oct 13 10:41:09 np0005485008 systemd[1]: Starting Hostname Service...
Oct 13 10:41:09 np0005485008 systemd[1]: Started Hostname Service.
Oct 13 10:41:09 np0005485008 systemd-hostnamed[852]: Hostname set to <np0005485008.novalocal> (static)
Oct 13 10:41:09 np0005485008 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 13 10:41:09 np0005485008 systemd[1]: Reached target Preparation for Network.
Oct 13 10:41:09 np0005485008 systemd[1]: Starting Network Manager...
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.7947] NetworkManager (version 1.54.1-1.el9) is starting... (boot:19574273-8afa-412a-830f-e7555ab9fd7b)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.7957] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8163] manager[0x55632a04f080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8232] hostname: hostname: using hostnamed
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8233] hostname: static hostname changed from (none) to "np0005485008.novalocal"
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8238] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8385] manager[0x55632a04f080]: rfkill: Wi-Fi hardware radio set enabled
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8386] manager[0x55632a04f080]: rfkill: WWAN hardware radio set enabled
Oct 13 10:41:09 np0005485008 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8471] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8471] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8472] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8472] manager: Networking is enabled by state file
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8473] settings: Loaded settings plugin: keyfile (internal)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8508] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8540] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8564] dhcp: init: Using DHCP client 'internal'
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8567] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8581] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8600] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8608] device (lo): Activation: starting connection 'lo' (05f88e04-4bde-42e0-981d-e26944b28dbb)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8616] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8620] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8670] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8677] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 13 10:41:09 np0005485008 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8683] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8686] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8690] device (eth0): carrier: link connected
Oct 13 10:41:09 np0005485008 systemd[1]: Started Network Manager.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8702] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8707] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 13 10:41:09 np0005485008 systemd[1]: Reached target Network.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8723] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8728] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8731] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8736] manager: NetworkManager state is now CONNECTING
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8739] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8747] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8752] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 10:41:09 np0005485008 systemd[1]: Starting Network Manager Wait Online...
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8802] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8811] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 13 10:41:09 np0005485008 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8841] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 10:41:09 np0005485008 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8962] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8971] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8983] device (lo): Activation: successful, device activated.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.8992] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.9019] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.9027] manager: NetworkManager state is now CONNECTED_SITE
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.9034] device (eth0): Activation: successful, device activated.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.9040] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 13 10:41:09 np0005485008 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 13 10:41:09 np0005485008 NetworkManager[856]: <info>  [1760366469.9045] manager: startup complete
Oct 13 10:41:09 np0005485008 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 13 10:41:09 np0005485008 systemd[1]: Reached target NFS client services.
Oct 13 10:41:09 np0005485008 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 13 10:41:09 np0005485008 systemd[1]: Reached target Remote File Systems.
Oct 13 10:41:09 np0005485008 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 13 10:41:09 np0005485008 systemd[1]: Finished Network Manager Wait Online.
Oct 13 10:41:09 np0005485008 systemd[1]: Starting Cloud-init: Network Stage...
Oct 13 10:41:10 np0005485008 cloud-init[917]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 13 Oct 2025 14:41:10 +0000. Up 7.94 seconds.
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |  eth0  | True |         38.102.83.17         | 255.255.255.0 | global | fa:16:3e:8f:53:54 |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |  eth0  | True | fe80::f816:3eff:fe8f:5354/64 |       .       |  link  | fa:16:3e:8f:53:54 |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 13 10:41:10 np0005485008 cloud-init[917]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 13 10:41:11 np0005485008 cloud-init[917]: Generating public/private rsa key pair.
Oct 13 10:41:11 np0005485008 cloud-init[917]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 13 10:41:11 np0005485008 cloud-init[917]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 13 10:41:11 np0005485008 cloud-init[917]: The key fingerprint is:
Oct 13 10:41:11 np0005485008 cloud-init[917]: SHA256:v+9Mgr3iIfgPUlh913LZ5FHJUYCmsm7h/7bNUyk07fI root@np0005485008.novalocal
Oct 13 10:41:11 np0005485008 cloud-init[917]: The key's randomart image is:
Oct 13 10:41:11 np0005485008 cloud-init[917]: +---[RSA 3072]----+
Oct 13 10:41:11 np0005485008 cloud-init[917]: |             .o+O|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |       .    o. B.|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |      . . .oo = o|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |     o  .... = . |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |    . . So  . o .|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |     o  o+   o o.|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |    o ooo.+ . +. |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |     o o=. *.o.E |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |      .+oo+=*.o. |
Oct 13 10:41:11 np0005485008 cloud-init[917]: +----[SHA256]-----+
Oct 13 10:41:11 np0005485008 cloud-init[917]: Generating public/private ecdsa key pair.
Oct 13 10:41:11 np0005485008 cloud-init[917]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 13 10:41:11 np0005485008 cloud-init[917]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 13 10:41:11 np0005485008 cloud-init[917]: The key fingerprint is:
Oct 13 10:41:11 np0005485008 cloud-init[917]: SHA256:BDHxPixDcBTRV1Ihb0fyWYz0lbC8CAedz4Po/nG/aVk root@np0005485008.novalocal
Oct 13 10:41:11 np0005485008 cloud-init[917]: The key's randomart image is:
Oct 13 10:41:11 np0005485008 cloud-init[917]: +---[ECDSA 256]---+
Oct 13 10:41:11 np0005485008 cloud-init[917]: |    ..X* .+o*++o+|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |     o +. o*.+o=o|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |      . oo..*o+ .|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |     . + .oo.=.  |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |      o S  . ..  |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |       o o      E|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |        .  . .  o|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |         .  o .o.|
Oct 13 10:41:11 np0005485008 cloud-init[917]: |          ..  .+.|
Oct 13 10:41:11 np0005485008 cloud-init[917]: +----[SHA256]-----+
Oct 13 10:41:11 np0005485008 cloud-init[917]: Generating public/private ed25519 key pair.
Oct 13 10:41:11 np0005485008 cloud-init[917]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 13 10:41:11 np0005485008 cloud-init[917]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 13 10:41:11 np0005485008 cloud-init[917]: The key fingerprint is:
Oct 13 10:41:11 np0005485008 cloud-init[917]: SHA256:2MclEG1xm8G7r8IcHEjIXZkPviKFfrLLCepAwBrNqgs root@np0005485008.novalocal
Oct 13 10:41:11 np0005485008 cloud-init[917]: The key's randomart image is:
Oct 13 10:41:11 np0005485008 cloud-init[917]: +--[ED25519 256]--+
Oct 13 10:41:11 np0005485008 cloud-init[917]: |     . oo+o=o    |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |.o    o o.*..+   |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |o.o    o +.o+.   |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |oo    .oo.ooo    |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |o.   ...S.oo .   |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |o     + o.+ .    |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |E   .  = + . .   |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |.o . o..  +   .  |
Oct 13 10:41:11 np0005485008 cloud-init[917]: |..o   +.   ...   |
Oct 13 10:41:11 np0005485008 cloud-init[917]: +----[SHA256]-----+
Oct 13 10:41:11 np0005485008 systemd[1]: Finished Cloud-init: Network Stage.
Oct 13 10:41:11 np0005485008 systemd[1]: Reached target Cloud-config availability.
Oct 13 10:41:11 np0005485008 systemd[1]: Reached target Network is Online.
Oct 13 10:41:11 np0005485008 systemd[1]: Starting Cloud-init: Config Stage...
Oct 13 10:41:11 np0005485008 systemd[1]: Starting Notify NFS peers of a restart...
Oct 13 10:41:11 np0005485008 systemd[1]: Starting System Logging Service...
Oct 13 10:41:11 np0005485008 systemd[1]: Starting OpenSSH server daemon...
Oct 13 10:41:11 np0005485008 sm-notify[999]: Version 2.5.4 starting
Oct 13 10:41:11 np0005485008 systemd[1]: Starting Permit User Sessions...
Oct 13 10:41:11 np0005485008 systemd[1]: Started Notify NFS peers of a restart.
Oct 13 10:41:11 np0005485008 systemd[1]: Started OpenSSH server daemon.
Oct 13 10:41:11 np0005485008 systemd[1]: Finished Permit User Sessions.
Oct 13 10:41:11 np0005485008 systemd[1]: Started Command Scheduler.
Oct 13 10:41:11 np0005485008 systemd[1]: Started Getty on tty1.
Oct 13 10:41:11 np0005485008 systemd[1]: Started Serial Getty on ttyS0.
Oct 13 10:41:11 np0005485008 systemd[1]: Reached target Login Prompts.
Oct 13 10:41:11 np0005485008 rsyslogd[1000]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1000" x-info="https://www.rsyslog.com"] start
Oct 13 10:41:11 np0005485008 rsyslogd[1000]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 13 10:41:11 np0005485008 systemd[1]: Started System Logging Service.
Oct 13 10:41:11 np0005485008 systemd[1]: Reached target Multi-User System.
Oct 13 10:41:11 np0005485008 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 13 10:41:11 np0005485008 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 13 10:41:11 np0005485008 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 13 10:41:11 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 10:41:11 np0005485008 cloud-init[1013]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 13 Oct 2025 14:41:11 +0000. Up 9.65 seconds.
Oct 13 10:41:12 np0005485008 systemd[1]: Finished Cloud-init: Config Stage.
Oct 13 10:41:12 np0005485008 systemd[1]: Starting Cloud-init: Final Stage...
Oct 13 10:41:12 np0005485008 cloud-init[1017]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 13 Oct 2025 14:41:12 +0000. Up 10.04 seconds.
Oct 13 10:41:12 np0005485008 cloud-init[1019]: #############################################################
Oct 13 10:41:12 np0005485008 cloud-init[1020]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 13 10:41:12 np0005485008 cloud-init[1022]: 256 SHA256:BDHxPixDcBTRV1Ihb0fyWYz0lbC8CAedz4Po/nG/aVk root@np0005485008.novalocal (ECDSA)
Oct 13 10:41:12 np0005485008 cloud-init[1024]: 256 SHA256:2MclEG1xm8G7r8IcHEjIXZkPviKFfrLLCepAwBrNqgs root@np0005485008.novalocal (ED25519)
Oct 13 10:41:12 np0005485008 cloud-init[1026]: 3072 SHA256:v+9Mgr3iIfgPUlh913LZ5FHJUYCmsm7h/7bNUyk07fI root@np0005485008.novalocal (RSA)
Oct 13 10:41:12 np0005485008 cloud-init[1027]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 13 10:41:12 np0005485008 cloud-init[1028]: #############################################################
Oct 13 10:41:12 np0005485008 cloud-init[1017]: Cloud-init v. 24.4-7.el9 finished at Mon, 13 Oct 2025 14:41:12 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.28 seconds
Oct 13 10:41:12 np0005485008 systemd[1]: Finished Cloud-init: Final Stage.
Oct 13 10:41:12 np0005485008 systemd[1]: Reached target Cloud-init target.
Oct 13 10:41:12 np0005485008 systemd[1]: Startup finished in 1.633s (kernel) + 2.818s (initrd) + 5.906s (userspace) = 10.358s.
Oct 13 10:41:14 np0005485008 chronyd[797]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Oct 13 10:41:14 np0005485008 chronyd[797]: System clock TAI offset set to 37 seconds
Oct 13 10:41:19 np0005485008 irqbalance[779]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 13 10:41:19 np0005485008 irqbalance[779]: IRQ 25 affinity is now unmanaged
Oct 13 10:41:19 np0005485008 irqbalance[779]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 13 10:41:19 np0005485008 irqbalance[779]: IRQ 31 affinity is now unmanaged
Oct 13 10:41:19 np0005485008 irqbalance[779]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 13 10:41:19 np0005485008 irqbalance[779]: IRQ 28 affinity is now unmanaged
Oct 13 10:41:19 np0005485008 irqbalance[779]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 13 10:41:19 np0005485008 irqbalance[779]: IRQ 32 affinity is now unmanaged
Oct 13 10:41:19 np0005485008 irqbalance[779]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 13 10:41:19 np0005485008 irqbalance[779]: IRQ 30 affinity is now unmanaged
Oct 13 10:41:19 np0005485008 irqbalance[779]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 13 10:41:19 np0005485008 irqbalance[779]: IRQ 29 affinity is now unmanaged
Oct 13 10:41:20 np0005485008 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 10:41:29 np0005485008 systemd-logind[784]: New session 1 of user zuul.
Oct 13 10:41:29 np0005485008 systemd[1]: Created slice User Slice of UID 1000.
Oct 13 10:41:29 np0005485008 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 13 10:41:29 np0005485008 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 13 10:41:29 np0005485008 systemd[1]: Starting User Manager for UID 1000...
Oct 13 10:41:30 np0005485008 systemd[1055]: Queued start job for default target Main User Target.
Oct 13 10:41:30 np0005485008 systemd[1055]: Created slice User Application Slice.
Oct 13 10:41:30 np0005485008 systemd[1055]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 10:41:30 np0005485008 systemd[1055]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 10:41:30 np0005485008 systemd[1055]: Reached target Paths.
Oct 13 10:41:30 np0005485008 systemd[1055]: Reached target Timers.
Oct 13 10:41:30 np0005485008 systemd[1055]: Starting D-Bus User Message Bus Socket...
Oct 13 10:41:30 np0005485008 systemd[1055]: Starting Create User's Volatile Files and Directories...
Oct 13 10:41:30 np0005485008 systemd[1055]: Finished Create User's Volatile Files and Directories.
Oct 13 10:41:30 np0005485008 systemd[1055]: Listening on D-Bus User Message Bus Socket.
Oct 13 10:41:30 np0005485008 systemd[1055]: Reached target Sockets.
Oct 13 10:41:30 np0005485008 systemd[1055]: Reached target Basic System.
Oct 13 10:41:30 np0005485008 systemd[1055]: Reached target Main User Target.
Oct 13 10:41:30 np0005485008 systemd[1055]: Startup finished in 134ms.
Oct 13 10:41:30 np0005485008 systemd[1]: Started User Manager for UID 1000.
Oct 13 10:41:30 np0005485008 systemd[1]: Started Session 1 of User zuul.
Oct 13 10:41:30 np0005485008 python3[1138]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 10:41:34 np0005485008 python3[1166]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 10:41:39 np0005485008 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 10:41:40 np0005485008 python3[1226]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 10:41:41 np0005485008 python3[1266]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 13 10:41:42 np0005485008 python3[1292]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDulViuPGBVqdYX4HdNP0EPD5Tt2vR38X7hx5TEPtXZJI5hgyjaJu8xXDJw1AfFVaUytA4rNVT9OTLjFfPDEzowETvZOyj7ByrpiRarkR4tE4TCrp9yibmjcV1nkW1jNHgiR5HbNZgk93wiL6owxfwR6jko3iC32dbw5+a1olWHfy9uL1wr8dfKNJgpelmgXCEaKpGBHLfiSfKsgUf/WRLslSj245xC4SDnrfFygQFearT2eGZR67oIhU+FGE9V82Q49VEtDZMln+Gxn0i6ffSOe883r8INe70Xj4QZgmoAx9q6/ijpu1MA90tCmcRbpOj4S9DDMykAXJ4spOuqCqlu5ugw8j4SjCJ2jD/HRwtyh+0LTSrx6tlcY2zpBRvn2joYCbkklkmxXZAEdqqfI/EURrs3wKQMoM6etZdq9xqRgEazffkT/cphdjlpG94lE2rzBOSODHw9aKYmk8Zbe5AgKeOl+TjHMKmyLgZzumbo+6Y5Qrqv4x4Niz9bTjm37Ac= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:43 np0005485008 python3[1316]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:43 np0005485008 python3[1415]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:41:44 np0005485008 python3[1486]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760366503.5807662-230-100941095283781/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0c93ea7e097645c6ba5ab07c3acb257d_id_rsa follow=False checksum=95bb85699c3c9eff588c87fc6167253b34548d59 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:44 np0005485008 python3[1609]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:41:45 np0005485008 python3[1680]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760366504.5237157-274-244275901231691/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0c93ea7e097645c6ba5ab07c3acb257d_id_rsa.pub follow=False checksum=deb4fc86da2afd285bfe6420fee6cfc67311b96c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:46 np0005485008 python3[1728]: ansible-ping Invoked with data=pong
Oct 13 10:41:47 np0005485008 python3[1752]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 10:41:49 np0005485008 python3[1810]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 13 10:41:50 np0005485008 python3[1842]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:50 np0005485008 python3[1866]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:51 np0005485008 python3[1890]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:51 np0005485008 python3[1914]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:51 np0005485008 python3[1938]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:51 np0005485008 python3[1962]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:53 np0005485008 python3[1988]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:54 np0005485008 python3[2066]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:41:54 np0005485008 python3[2139]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760366513.84936-27-259423142502069/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:41:55 np0005485008 python3[2187]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:55 np0005485008 python3[2211]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:56 np0005485008 python3[2235]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:56 np0005485008 python3[2259]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:56 np0005485008 python3[2283]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:56 np0005485008 python3[2307]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:57 np0005485008 python3[2331]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:57 np0005485008 python3[2355]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:57 np0005485008 python3[2379]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:58 np0005485008 python3[2403]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:58 np0005485008 python3[2427]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:58 np0005485008 python3[2451]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:58 np0005485008 python3[2475]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:59 np0005485008 python3[2499]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:59 np0005485008 python3[2523]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:41:59 np0005485008 python3[2547]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:00 np0005485008 python3[2571]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:00 np0005485008 python3[2595]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:00 np0005485008 python3[2619]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:00 np0005485008 python3[2643]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:01 np0005485008 python3[2667]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:01 np0005485008 python3[2691]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:01 np0005485008 python3[2715]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:01 np0005485008 python3[2739]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:02 np0005485008 python3[2763]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:02 np0005485008 python3[2787]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:42:04 np0005485008 python3[2813]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 13 10:42:04 np0005485008 systemd[1]: Starting Time & Date Service...
Oct 13 10:42:04 np0005485008 systemd[1]: Started Time & Date Service.
Oct 13 10:42:04 np0005485008 systemd-timedated[2815]: Changed time zone to 'UTC' (UTC).
Oct 13 10:42:05 np0005485008 python3[2844]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:42:05 np0005485008 python3[2920]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:42:05 np0005485008 python3[2991]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760366525.2790315-203-92501755433801/source _original_basename=tmpwz2u1u41 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:42:06 np0005485008 python3[3091]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:42:06 np0005485008 python3[3162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760366526.178707-244-251751285254058/source _original_basename=tmpjb0jwqtb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:42:07 np0005485008 python3[3264]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:42:08 np0005485008 python3[3337]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760366527.4604003-307-248287213179088/source _original_basename=tmpjlbwxh0w follow=False checksum=673d2f3d6c56c6a6f0fd71b2f865eaf754405451 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:42:08 np0005485008 python3[3385]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:42:08 np0005485008 python3[3411]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:42:09 np0005485008 python3[3491]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:42:09 np0005485008 python3[3564]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760366529.090772-363-69373381015541/source _original_basename=tmpn55_1oe2 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:42:10 np0005485008 python3[3615]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-7373-f88d-00000000001e-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:42:10 np0005485008 python3[3643]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-7373-f88d-00000000001f-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 13 10:42:12 np0005485008 python3[3671]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:42:34 np0005485008 python3[3697]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:42:34 np0005485008 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 13 10:43:34 np0005485008 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 13 10:43:34 np0005485008 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1525] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 13 10:43:34 np0005485008 systemd-udevd[3700]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1836] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1863] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1866] device (eth1): carrier: link connected
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1868] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1873] policy: auto-activating connection 'Wired connection 1' (9acb260b-8b23-3cab-89de-8291129a90a1)
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1877] device (eth1): Activation: starting connection 'Wired connection 1' (9acb260b-8b23-3cab-89de-8291129a90a1)
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1878] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1880] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1883] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 10:43:34 np0005485008 NetworkManager[856]: <info>  [1760366614.1887] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 13 10:43:34 np0005485008 systemd[1055]: Starting Mark boot as successful...
Oct 13 10:43:34 np0005485008 systemd[1055]: Finished Mark boot as successful.
Oct 13 10:43:34 np0005485008 systemd-logind[784]: Session 1 logged out. Waiting for processes to exit.
Oct 13 10:43:35 np0005485008 systemd-logind[784]: New session 3 of user zuul.
Oct 13 10:43:35 np0005485008 systemd[1]: Started Session 3 of User zuul.
Oct 13 10:43:35 np0005485008 python3[3732]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-ff13-8344-000000000173-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:43:42 np0005485008 python3[3813]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:43:42 np0005485008 python3[3886]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760366621.9330618-154-89521771295253/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d502d62a565f07d3b77bb8caa8257e454b3ef1b0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:43:43 np0005485008 python3[3936]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 10:43:43 np0005485008 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 13 10:43:43 np0005485008 systemd[1]: Stopped Network Manager Wait Online.
Oct 13 10:43:43 np0005485008 systemd[1]: Stopping Network Manager Wait Online...
Oct 13 10:43:43 np0005485008 systemd[1]: Stopping Network Manager...
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2381] caught SIGTERM, shutting down normally.
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2395] dhcp4 (eth0): canceled DHCP transaction
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2396] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2396] dhcp4 (eth0): state changed no lease
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2400] manager: NetworkManager state is now CONNECTING
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2546] dhcp4 (eth1): canceled DHCP transaction
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2546] dhcp4 (eth1): state changed no lease
Oct 13 10:43:43 np0005485008 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 10:43:43 np0005485008 NetworkManager[856]: <info>  [1760366623.2591] exiting (success)
Oct 13 10:43:43 np0005485008 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 10:43:43 np0005485008 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 13 10:43:43 np0005485008 systemd[1]: Stopped Network Manager.
Oct 13 10:43:43 np0005485008 systemd[1]: NetworkManager.service: Consumed 1.050s CPU time, 9.9M memory peak.
Oct 13 10:43:43 np0005485008 systemd[1]: Starting Network Manager...
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.3135] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:19574273-8afa-412a-830f-e7555ab9fd7b)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.3138] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.3193] manager[0x559aca3ba070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 13 10:43:43 np0005485008 systemd[1]: Starting Hostname Service...
Oct 13 10:43:43 np0005485008 systemd[1]: Started Hostname Service.
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4057] hostname: hostname: using hostnamed
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4058] hostname: static hostname changed from (none) to "np0005485008.novalocal"
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4065] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4072] manager[0x559aca3ba070]: rfkill: Wi-Fi hardware radio set enabled
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4072] manager[0x559aca3ba070]: rfkill: WWAN hardware radio set enabled
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4102] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4103] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4103] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4104] manager: Networking is enabled by state file
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4106] settings: Loaded settings plugin: keyfile (internal)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4110] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4135] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4144] dhcp: init: Using DHCP client 'internal'
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4146] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4151] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4158] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4166] device (lo): Activation: starting connection 'lo' (05f88e04-4bde-42e0-981d-e26944b28dbb)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4173] device (eth0): carrier: link connected
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4176] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4181] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4181] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4187] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4194] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4201] device (eth1): carrier: link connected
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4205] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4210] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (9acb260b-8b23-3cab-89de-8291129a90a1) (indicated)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4210] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4217] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4225] device (eth1): Activation: starting connection 'Wired connection 1' (9acb260b-8b23-3cab-89de-8291129a90a1)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4232] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 13 10:43:43 np0005485008 systemd[1]: Started Network Manager.
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4236] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4237] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4239] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4241] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4243] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4245] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4248] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4249] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4254] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4257] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4266] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4268] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4286] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4291] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4303] device (lo): Activation: successful, device activated.
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4313] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4322] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 13 10:43:43 np0005485008 systemd[1]: Starting Network Manager Wait Online...
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4392] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4423] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4426] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4430] manager: NetworkManager state is now CONNECTED_SITE
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4434] device (eth0): Activation: successful, device activated.
Oct 13 10:43:43 np0005485008 NetworkManager[3948]: <info>  [1760366623.4440] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 13 10:43:43 np0005485008 python3[4020]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-ff13-8344-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:43:53 np0005485008 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 10:44:13 np0005485008 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.2755] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 13 10:44:28 np0005485008 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 10:44:28 np0005485008 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3023] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3025] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3041] device (eth1): Activation: successful, device activated.
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3048] manager: startup complete
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3051] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <warn>  [1760366668.3060] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3067] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 13 10:44:28 np0005485008 systemd[1]: Finished Network Manager Wait Online.
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3182] dhcp4 (eth1): canceled DHCP transaction
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3183] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3183] dhcp4 (eth1): state changed no lease
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3204] policy: auto-activating connection 'ci-private-network' (9c569e06-dba0-5a1c-999c-691ec6c75ed2)
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3209] device (eth1): Activation: starting connection 'ci-private-network' (9c569e06-dba0-5a1c-999c-691ec6c75ed2)
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3210] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3213] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3220] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3229] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3283] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3287] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 10:44:28 np0005485008 NetworkManager[3948]: <info>  [1760366668.3293] device (eth1): Activation: successful, device activated.
Oct 13 10:44:38 np0005485008 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 10:44:43 np0005485008 systemd[1]: session-3.scope: Deactivated successfully.
Oct 13 10:44:43 np0005485008 systemd[1]: session-3.scope: Consumed 1.594s CPU time.
Oct 13 10:44:43 np0005485008 systemd-logind[784]: Session 3 logged out. Waiting for processes to exit.
Oct 13 10:44:43 np0005485008 systemd-logind[784]: Removed session 3.
Oct 13 10:44:50 np0005485008 systemd-logind[784]: New session 4 of user zuul.
Oct 13 10:44:50 np0005485008 systemd[1]: Started Session 4 of User zuul.
Oct 13 10:44:50 np0005485008 python3[4131]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:44:50 np0005485008 python3[4204]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760366690.2494614-312-236106513976319/source _original_basename=tmph5nwyjg2 follow=False checksum=27ca396200a6bcfb8bb59e20e7ec6cb7b526c4e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:44:53 np0005485008 systemd[1]: session-4.scope: Deactivated successfully.
Oct 13 10:44:53 np0005485008 systemd-logind[784]: Session 4 logged out. Waiting for processes to exit.
Oct 13 10:44:53 np0005485008 systemd-logind[784]: Removed session 4.
Oct 13 10:47:16 np0005485008 systemd[1055]: Created slice User Background Tasks Slice.
Oct 13 10:47:16 np0005485008 systemd[1055]: Starting Cleanup of User's Temporary Files and Directories...
Oct 13 10:47:16 np0005485008 systemd[1055]: Finished Cleanup of User's Temporary Files and Directories.
Oct 13 10:49:54 np0005485008 systemd-logind[784]: New session 5 of user zuul.
Oct 13 10:49:54 np0005485008 systemd[1]: Started Session 5 of User zuul.
Oct 13 10:49:54 np0005485008 python3[4264]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-bab7-615d-000000000c97-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:49:55 np0005485008 python3[4292]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:49:55 np0005485008 python3[4318]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:49:55 np0005485008 python3[4345]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:49:56 np0005485008 python3[4371]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:49:56 np0005485008 python3[4397]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:49:56 np0005485008 python3[4397]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 13 10:49:57 np0005485008 python3[4423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 10:49:57 np0005485008 systemd[1]: Reloading.
Oct 13 10:49:57 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 10:49:59 np0005485008 python3[4479]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 13 10:50:00 np0005485008 python3[4505]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:50:00 np0005485008 python3[4533]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:50:00 np0005485008 python3[4561]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:50:01 np0005485008 python3[4589]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:50:01 np0005485008 python3[4616]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-bab7-615d-000000000c9d-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:50:02 np0005485008 python3[4646]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 10:50:05 np0005485008 systemd[1]: session-5.scope: Deactivated successfully.
Oct 13 10:50:05 np0005485008 systemd[1]: session-5.scope: Consumed 3.885s CPU time.
Oct 13 10:50:05 np0005485008 systemd-logind[784]: Session 5 logged out. Waiting for processes to exit.
Oct 13 10:50:05 np0005485008 systemd-logind[784]: Removed session 5.
Oct 13 10:50:06 np0005485008 systemd-logind[784]: New session 6 of user zuul.
Oct 13 10:50:06 np0005485008 systemd[1]: Started Session 6 of User zuul.
Oct 13 10:50:07 np0005485008 python3[4680]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 13 10:50:21 np0005485008 kernel: SELinux:  Converting 363 SID table entries...
Oct 13 10:50:21 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 10:50:21 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 10:50:21 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 10:50:21 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 10:50:21 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 10:50:21 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 10:50:21 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 10:50:30 np0005485008 kernel: SELinux:  Converting 363 SID table entries...
Oct 13 10:50:30 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 10:50:30 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 10:50:30 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 10:50:30 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 10:50:30 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 10:50:30 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 10:50:30 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 10:50:39 np0005485008 kernel: SELinux:  Converting 363 SID table entries...
Oct 13 10:50:39 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 10:50:39 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 10:50:39 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 10:50:39 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 10:50:39 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 10:50:39 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 10:50:39 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 10:50:40 np0005485008 setsebool[4748]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 13 10:50:40 np0005485008 setsebool[4748]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 13 10:50:51 np0005485008 kernel: SELinux:  Converting 366 SID table entries...
Oct 13 10:50:51 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 10:50:51 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 10:50:51 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 10:50:51 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 10:50:51 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 10:50:51 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 10:50:51 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 10:51:09 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 13 10:51:09 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 10:51:09 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 10:51:09 np0005485008 systemd[1]: Reloading.
Oct 13 10:51:09 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 10:51:09 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 10:51:10 np0005485008 systemd[1]: Starting PackageKit Daemon...
Oct 13 10:51:10 np0005485008 systemd[1]: Starting Authorization Manager...
Oct 13 10:51:10 np0005485008 polkitd[6368]: Started polkitd version 0.117
Oct 13 10:51:10 np0005485008 systemd[1]: Started Authorization Manager.
Oct 13 10:51:10 np0005485008 systemd[1]: Started PackageKit Daemon.
Oct 13 10:51:18 np0005485008 python3[11210]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-5a6b-c849-00000000000b-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 10:51:19 np0005485008 kernel: evm: overlay not supported
Oct 13 10:51:19 np0005485008 systemd[1055]: Starting D-Bus User Message Bus...
Oct 13 10:51:19 np0005485008 dbus-broker-launch[11720]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 13 10:51:19 np0005485008 dbus-broker-launch[11720]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 13 10:51:19 np0005485008 systemd[1055]: Started D-Bus User Message Bus.
Oct 13 10:51:19 np0005485008 dbus-broker-lau[11720]: Ready
Oct 13 10:51:19 np0005485008 systemd[1055]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 13 10:51:19 np0005485008 systemd[1055]: Created slice Slice /user.
Oct 13 10:51:19 np0005485008 systemd[1055]: podman-11647.scope: unit configures an IP firewall, but not running as root.
Oct 13 10:51:19 np0005485008 systemd[1055]: (This warning is only shown for the first unit using IP firewalling.)
Oct 13 10:51:19 np0005485008 systemd[1055]: Started podman-11647.scope.
Oct 13 10:51:19 np0005485008 systemd[1055]: Started podman-pause-4378cad9.scope.
Oct 13 10:51:20 np0005485008 python3[12178]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.38:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.38:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:51:20 np0005485008 systemd[1]: session-6.scope: Deactivated successfully.
Oct 13 10:51:20 np0005485008 systemd[1]: session-6.scope: Consumed 59.614s CPU time.
Oct 13 10:51:20 np0005485008 systemd-logind[784]: Session 6 logged out. Waiting for processes to exit.
Oct 13 10:51:20 np0005485008 systemd-logind[784]: Removed session 6.
Oct 13 10:51:44 np0005485008 systemd-logind[784]: New session 7 of user zuul.
Oct 13 10:51:44 np0005485008 systemd[1]: Started Session 7 of User zuul.
Oct 13 10:51:44 np0005485008 python3[21043]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLlw13lgE7PuOOrRpPaXfy1fiaAQ87XdyZWSvWT6Tue5kR/HuICmDMp9CG4FZK8eVOwC3hQ0PhN+FtsRnLTdqOM= zuul@np0005485006.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:51:45 np0005485008 python3[21228]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLlw13lgE7PuOOrRpPaXfy1fiaAQ87XdyZWSvWT6Tue5kR/HuICmDMp9CG4FZK8eVOwC3hQ0PhN+FtsRnLTdqOM= zuul@np0005485006.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:51:45 np0005485008 python3[21650]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005485008.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 13 10:51:46 np0005485008 python3[21904]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLlw13lgE7PuOOrRpPaXfy1fiaAQ87XdyZWSvWT6Tue5kR/HuICmDMp9CG4FZK8eVOwC3hQ0PhN+FtsRnLTdqOM= zuul@np0005485006.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 13 10:51:47 np0005485008 python3[22198]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:51:47 np0005485008 python3[22473]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760367106.7657404-152-211853119327262/source _original_basename=tmpujhuckn7 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:51:48 np0005485008 python3[22864]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Oct 13 10:51:48 np0005485008 systemd[1]: Starting Hostname Service...
Oct 13 10:51:48 np0005485008 systemd[1]: Started Hostname Service.
Oct 13 10:51:48 np0005485008 systemd-hostnamed[22978]: Changed pretty hostname to 'compute-1'
Oct 13 10:51:48 np0005485008 systemd-hostnamed[22978]: Hostname set to <compute-1> (static)
Oct 13 10:51:48 np0005485008 NetworkManager[3948]: <info>  [1760367108.6111] hostname: static hostname changed from "np0005485008.novalocal" to "compute-1"
Oct 13 10:51:48 np0005485008 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 10:51:48 np0005485008 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 10:51:49 np0005485008 systemd[1]: session-7.scope: Deactivated successfully.
Oct 13 10:51:49 np0005485008 systemd[1]: session-7.scope: Consumed 2.411s CPU time.
Oct 13 10:51:49 np0005485008 systemd-logind[784]: Session 7 logged out. Waiting for processes to exit.
Oct 13 10:51:49 np0005485008 systemd-logind[784]: Removed session 7.
Oct 13 10:51:58 np0005485008 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 10:51:59 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 10:51:59 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 10:51:59 np0005485008 systemd[1]: man-db-cache-update.service: Consumed 59.594s CPU time.
Oct 13 10:51:59 np0005485008 systemd[1]: run-r24df3e7d9c7c41d89a5747c28007e621.service: Deactivated successfully.
Oct 13 10:52:18 np0005485008 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 10:56:16 np0005485008 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 13 10:56:16 np0005485008 systemd[1]: packagekit.service: Deactivated successfully.
Oct 13 10:56:16 np0005485008 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 13 10:56:16 np0005485008 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 13 10:56:16 np0005485008 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 13 10:56:26 np0005485008 systemd-logind[784]: New session 8 of user zuul.
Oct 13 10:56:26 np0005485008 systemd[1]: Started Session 8 of User zuul.
Oct 13 10:56:27 np0005485008 python3[26629]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 10:56:29 np0005485008 python3[26745]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:56:29 np0005485008 python3[26818]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760367388.7290657-30438-69703911723822/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb590c69a49e58fa27e01a012361d43ce3607f1e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:56:29 np0005485008 python3[26844]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:56:29 np0005485008 python3[26917]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760367388.7290657-30438-69703911723822/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:56:30 np0005485008 python3[26943]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:56:30 np0005485008 python3[27016]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760367388.7290657-30438-69703911723822/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:56:30 np0005485008 python3[27042]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:56:31 np0005485008 python3[27115]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760367388.7290657-30438-69703911723822/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:56:31 np0005485008 python3[27141]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:56:31 np0005485008 python3[27214]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760367388.7290657-30438-69703911723822/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:56:32 np0005485008 python3[27240]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:56:32 np0005485008 python3[27313]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760367388.7290657-30438-69703911723822/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:56:32 np0005485008 python3[27339]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 13 10:56:32 np0005485008 python3[27412]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760367388.7290657-30438-69703911723822/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=916c8083813384f9d9d3928ca4f7abdab735987d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 10:58:34 np0005485008 python3[27460]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:03:34 np0005485008 systemd[1]: session-8.scope: Deactivated successfully.
Oct 13 11:03:34 np0005485008 systemd[1]: session-8.scope: Consumed 4.896s CPU time.
Oct 13 11:03:34 np0005485008 systemd-logind[784]: Session 8 logged out. Waiting for processes to exit.
Oct 13 11:03:34 np0005485008 systemd-logind[784]: Removed session 8.
Oct 13 11:13:08 np0005485008 systemd-logind[784]: New session 9 of user zuul.
Oct 13 11:13:08 np0005485008 systemd[1]: Started Session 9 of User zuul.
Oct 13 11:13:09 np0005485008 python3.9[27636]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:13:10 np0005485008 python3.9[27817]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:13:18 np0005485008 systemd[1]: session-9.scope: Deactivated successfully.
Oct 13 11:13:18 np0005485008 systemd[1]: session-9.scope: Consumed 8.464s CPU time.
Oct 13 11:13:18 np0005485008 systemd-logind[784]: Session 9 logged out. Waiting for processes to exit.
Oct 13 11:13:18 np0005485008 systemd-logind[784]: Removed session 9.
Oct 13 11:13:23 np0005485008 systemd-logind[784]: New session 10 of user zuul.
Oct 13 11:13:23 np0005485008 systemd[1]: Started Session 10 of User zuul.
Oct 13 11:13:24 np0005485008 python3.9[28027]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:13:25 np0005485008 systemd[1]: session-10.scope: Deactivated successfully.
Oct 13 11:13:25 np0005485008 systemd-logind[784]: Session 10 logged out. Waiting for processes to exit.
Oct 13 11:13:25 np0005485008 systemd-logind[784]: Removed session 10.
Oct 13 11:13:41 np0005485008 systemd-logind[784]: New session 11 of user zuul.
Oct 13 11:13:41 np0005485008 systemd[1]: Started Session 11 of User zuul.
Oct 13 11:13:42 np0005485008 python3.9[28208]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 13 11:13:43 np0005485008 python3.9[28382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:13:45 np0005485008 python3.9[28534]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:13:46 np0005485008 python3.9[28687]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:13:47 np0005485008 python3.9[28839]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:13:47 np0005485008 python3.9[28991]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:13:48 np0005485008 python3.9[29114]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368427.3845673-131-136621047820809/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:13:49 np0005485008 python3.9[29266]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:13:50 np0005485008 python3.9[29422]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:13:51 np0005485008 python3.9[29572]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:13:56 np0005485008 python3.9[29829]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:13:57 np0005485008 python3.9[29979]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:13:58 np0005485008 python3.9[30133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:13:59 np0005485008 python3.9[30291]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:14:01 np0005485008 python3.9[30375]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:14:47 np0005485008 systemd[1]: Reloading.
Oct 13 11:14:47 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:14:47 np0005485008 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 13 11:14:47 np0005485008 systemd[1]: Reloading.
Oct 13 11:14:47 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:14:47 np0005485008 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 13 11:14:47 np0005485008 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 13 11:14:47 np0005485008 systemd[1]: Reloading.
Oct 13 11:14:47 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:14:48 np0005485008 systemd[1]: Starting dnf makecache...
Oct 13 11:14:48 np0005485008 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 13 11:14:48 np0005485008 dnf[30654]: Failed determining last makecache time.
Oct 13 11:14:48 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:14:48 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:14:48 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-barbican-42b4c41831408a8e323 115 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 115 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-cinder-1c00d6490d88e436f26ef 127 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-python-stevedore-c4acc5639fd2329372142 138 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-python-observabilityclient-2f31846d73c 149 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-diskimage-builder-7d793e664cf892461c55 156 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 162 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-python-designate-tests-tempest-347fdbc 161 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-glance-1fd12c29b339f30fe823e 156 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 147 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-manila-3c01b7181572c95dac462 139 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-python-networking-mlnx-d9ea9e7bae2aa29 155 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-octavia-ba397f07a7331190208c 134 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-watcher-c014f81a8647287f6dcc 135 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-python-tcib-ff70d03bf5bc0bb6f3540a02d3 158 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-puppet-ceph-91ba84bc002c318a7f961d084e 135 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-swift-dc98a8463506ac520c469a 140 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-python-tempestconf-8515371b7cceebd4282 138 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: delorean-openstack-heat-ui-013accbfd179753bc3f0 152 kB/s | 3.0 kB     00:00
Oct 13 11:14:48 np0005485008 dnf[30654]: CentOS Stream 9 - BaseOS                         37 kB/s | 6.7 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: CentOS Stream 9 - AppStream                      70 kB/s | 6.8 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: CentOS Stream 9 - CRB                            67 kB/s | 6.6 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: CentOS Stream 9 - Extras packages                81 kB/s | 8.0 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: dlrn-antelope-testing                           168 kB/s | 3.0 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: dlrn-antelope-build-deps                        165 kB/s | 3.0 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: centos9-rabbitmq                                 95 kB/s | 3.0 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: centos9-storage                                 140 kB/s | 3.0 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: centos9-opstools                                127 kB/s | 3.0 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: NFV SIG OpenvSwitch                             125 kB/s | 3.0 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: repo-setup-centos-appstream                     159 kB/s | 4.4 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: repo-setup-centos-baseos                        147 kB/s | 3.9 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: repo-setup-centos-highavailability              132 kB/s | 3.9 kB     00:00
Oct 13 11:14:49 np0005485008 dnf[30654]: repo-setup-centos-powertools                    205 kB/s | 4.3 kB     00:00
Oct 13 11:14:50 np0005485008 dnf[30654]: Extra Packages for Enterprise Linux 9 - x86_64  131 kB/s |  33 kB     00:00
Oct 13 11:14:50 np0005485008 dnf[30654]: Metadata cache created.
Oct 13 11:14:50 np0005485008 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 13 11:14:50 np0005485008 systemd[1]: Finished dnf makecache.
Oct 13 11:14:50 np0005485008 systemd[1]: dnf-makecache.service: Consumed 1.753s CPU time.
Oct 13 11:15:50 np0005485008 kernel: SELinux:  Converting 2714 SID table entries...
Oct 13 11:15:50 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 11:15:50 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 11:15:50 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 11:15:50 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 11:15:50 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 11:15:50 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 11:15:50 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 11:15:51 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 13 11:15:51 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 11:15:51 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 11:15:51 np0005485008 systemd[1]: Reloading.
Oct 13 11:15:51 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:15:51 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 11:15:51 np0005485008 systemd[1]: Starting PackageKit Daemon...
Oct 13 11:15:51 np0005485008 systemd[1]: Started PackageKit Daemon.
Oct 13 11:15:52 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 11:15:52 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 11:15:52 np0005485008 systemd[1]: man-db-cache-update.service: Consumed 1.153s CPU time.
Oct 13 11:15:52 np0005485008 systemd[1]: run-recb0a626fd5e4c6eaf6e3f2c2d4835b9.service: Deactivated successfully.
Oct 13 11:15:52 np0005485008 python3.9[31925]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:15:55 np0005485008 python3.9[32206]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 13 11:15:56 np0005485008 python3.9[32358]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 13 11:15:58 np0005485008 python3.9[32511]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:16:02 np0005485008 python3.9[32663]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 13 11:16:05 np0005485008 python3.9[32815]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:16:06 np0005485008 python3.9[32967]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:16:06 np0005485008 python3.9[33090]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368565.6441061-439-265942101312251/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:16:08 np0005485008 python3.9[33242]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 13 11:16:09 np0005485008 python3.9[33395]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 11:16:09 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 11:16:09 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 11:16:10 np0005485008 python3.9[33554]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 11:16:11 np0005485008 python3.9[33714]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 13 11:16:12 np0005485008 python3.9[33867]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 11:16:13 np0005485008 python3.9[34025]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 13 11:16:14 np0005485008 python3.9[34177]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:16:16 np0005485008 python3.9[34330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:16:17 np0005485008 python3.9[34482]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:16:18 np0005485008 python3.9[34605]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760368577.092109-629-259584376182737/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:16:19 np0005485008 python3.9[34757]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:16:19 np0005485008 systemd[1]: Starting Load Kernel Modules...
Oct 13 11:16:19 np0005485008 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 11:16:19 np0005485008 kernel: Bridge firewalling registered
Oct 13 11:16:19 np0005485008 systemd-modules-load[34761]: Inserted module 'br_netfilter'
Oct 13 11:16:19 np0005485008 systemd[1]: Finished Load Kernel Modules.
Oct 13 11:16:20 np0005485008 python3.9[34916]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:16:21 np0005485008 python3.9[35039]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760368580.104466-675-81692539399095/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:16:22 np0005485008 python3.9[35191]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:16:25 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:16:25 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:16:26 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 11:16:26 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 11:16:26 np0005485008 systemd[1]: Reloading.
Oct 13 11:16:26 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:16:26 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 11:16:27 np0005485008 python3.9[36734]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:16:28 np0005485008 python3.9[37864]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 13 11:16:29 np0005485008 python3.9[38613]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:16:30 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 11:16:30 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 11:16:30 np0005485008 systemd[1]: man-db-cache-update.service: Consumed 4.982s CPU time.
Oct 13 11:16:30 np0005485008 systemd[1]: run-rab13c1800b2b4bdfae9a8ed66dc09cbb.service: Deactivated successfully.
Oct 13 11:16:30 np0005485008 python3.9[39363]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:16:30 np0005485008 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 13 11:16:31 np0005485008 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 13 11:16:32 np0005485008 python3.9[39736]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:16:32 np0005485008 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 13 11:16:32 np0005485008 systemd[1]: tuned.service: Deactivated successfully.
Oct 13 11:16:32 np0005485008 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 13 11:16:32 np0005485008 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 13 11:16:32 np0005485008 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 13 11:16:33 np0005485008 python3.9[39898]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 13 11:16:35 np0005485008 python3.9[40050]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:16:36 np0005485008 systemd[1]: Reloading.
Oct 13 11:16:36 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:16:37 np0005485008 python3.9[40238]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:16:37 np0005485008 systemd[1]: Reloading.
Oct 13 11:16:37 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:16:38 np0005485008 python3.9[40427]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:16:38 np0005485008 python3.9[40580]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:16:38 np0005485008 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 13 11:16:39 np0005485008 python3.9[40733]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:16:41 np0005485008 python3.9[40895]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:16:42 np0005485008 python3.9[41048]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:16:42 np0005485008 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 11:16:42 np0005485008 systemd[1]: Stopped Apply Kernel Variables.
Oct 13 11:16:42 np0005485008 systemd[1]: Stopping Apply Kernel Variables...
Oct 13 11:16:42 np0005485008 systemd[1]: Starting Apply Kernel Variables...
Oct 13 11:16:42 np0005485008 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 11:16:42 np0005485008 systemd[1]: Finished Apply Kernel Variables.
Oct 13 11:16:43 np0005485008 systemd[1]: session-11.scope: Deactivated successfully.
Oct 13 11:16:43 np0005485008 systemd[1]: session-11.scope: Consumed 2min 14.721s CPU time.
Oct 13 11:16:43 np0005485008 systemd-logind[784]: Session 11 logged out. Waiting for processes to exit.
Oct 13 11:16:43 np0005485008 systemd-logind[784]: Removed session 11.
Oct 13 11:16:48 np0005485008 systemd-logind[784]: New session 12 of user zuul.
Oct 13 11:16:48 np0005485008 systemd[1]: Started Session 12 of User zuul.
Oct 13 11:16:49 np0005485008 python3.9[41231]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:16:51 np0005485008 python3.9[41385]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:16:52 np0005485008 python3.9[41541]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:16:53 np0005485008 python3.9[41692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:16:54 np0005485008 python3.9[41848]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:16:55 np0005485008 python3.9[41932]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:16:58 np0005485008 python3.9[42085]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:16:59 np0005485008 python3.9[42256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:17:00 np0005485008 python3.9[42408]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:17:00 np0005485008 systemd[1]: var-lib-containers-storage-overlay-compat222087566-merged.mount: Deactivated successfully.
Oct 13 11:17:00 np0005485008 podman[42409]: 2025-10-13 15:17:00.291000302 +0000 UTC m=+0.055226391 system refresh
Oct 13 11:17:01 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:01 np0005485008 python3.9[42571]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:17:02 np0005485008 python3.9[42694]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368620.7324302-204-276548042176862/.source.json follow=False _original_basename=podman_network_config.j2 checksum=23bfe759dfff50d012b1a335f2be23573f0d70b6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:17:03 np0005485008 python3.9[42846]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:17:03 np0005485008 python3.9[42969]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760368622.351663-234-23445933185677/.source.conf follow=False _original_basename=registries.conf.j2 checksum=084ac298e40a7f81215c3c34c5dce3c28315d7c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:17:04 np0005485008 python3.9[43121]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:17:05 np0005485008 python3.9[43273]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:17:05 np0005485008 python3.9[43425]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:17:06 np0005485008 python3.9[43577]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:17:07 np0005485008 python3.9[43727]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:17:08 np0005485008 python3.9[43881]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:10 np0005485008 python3.9[44034]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:13 np0005485008 python3.9[44194]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:15 np0005485008 python3.9[44347]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:17 np0005485008 python3.9[44500]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:19 np0005485008 irqbalance[779]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 13 11:17:19 np0005485008 irqbalance[779]: IRQ 27 affinity is now unmanaged
Oct 13 11:17:20 np0005485008 python3.9[44656]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:24 np0005485008 python3.9[44824]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:26 np0005485008 python3.9[44977]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:17:40 np0005485008 python3.9[45314]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:17:41 np0005485008 python3.9[45489]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:17:41 np0005485008 python3.9[45612]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760368660.6308312-512-257867712927990/.source.json _original_basename=.xuoui0yf follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:17:42 np0005485008 python3.9[45764]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 11:17:42 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:45 np0005485008 systemd[1]: var-lib-containers-storage-overlay-compat3140491464-lower\x2dmapped.mount: Deactivated successfully.
Oct 13 11:17:49 np0005485008 podman[45776]: 2025-10-13 15:17:49.209921537 +0000 UTC m=+6.204860757 image pull 7acf5363984cc8f102650810da36ae6f915a365c30bf42518548c6b195c5c57c quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 11:17:49 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:49 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:49 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:50 np0005485008 python3.9[46075]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 11:17:50 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:53 np0005485008 podman[46088]: 2025-10-13 15:17:53.117669858 +0000 UTC m=+2.364173697 image pull 96ad696e7914500f1daa441ab9a026a0f524ff8aa3b224853f7d517ebfe3b2e5 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 13 11:17:53 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:53 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:53 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:17:54 np0005485008 python3.9[46343]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 11:17:54 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:02 np0005485008 podman[46354]: 2025-10-13 15:18:02.198756245 +0000 UTC m=+7.824233854 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:18:02 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:02 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:02 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:03 np0005485008 python3.9[46632]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 11:18:03 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:04 np0005485008 podman[46645]: 2025-10-13 15:18:04.355399191 +0000 UTC m=+1.034470999 image pull 7042d0e4c063a84abce3ee29396c85a102ad504e82c1a0963682094dbdd1cf87 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 13 11:18:04 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:04 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:04 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:05 np0005485008 python3.9[46881]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 11:18:05 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:15 np0005485008 podman[46893]: 2025-10-13 15:18:15.452045107 +0000 UTC m=+9.930629517 image pull 97abb4e5d6eb812c6abde306e15dbdde9dbba5ef5cd42ad11b83abc055914569 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 11:18:15 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:15 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:15 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:16 np0005485008 python3.9[47154]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 11:18:16 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:20 np0005485008 podman[47166]: 2025-10-13 15:18:20.319038151 +0000 UTC m=+3.656470251 image pull b3025f7abe3491529c72028f16e6d504d643486c2fd9b7260f2637e72503a919 quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Oct 13 11:18:20 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:20 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:20 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:21 np0005485008 python3.9[47423]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Oct 13 11:18:22 np0005485008 podman[47436]: 2025-10-13 15:18:22.453625445 +0000 UTC m=+1.210544564 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Oct 13 11:18:22 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:22 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:22 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:18:23 np0005485008 systemd-logind[784]: Session 12 logged out. Waiting for processes to exit.
Oct 13 11:18:23 np0005485008 systemd[1]: session-12.scope: Deactivated successfully.
Oct 13 11:18:23 np0005485008 systemd[1]: session-12.scope: Consumed 1min 51.351s CPU time.
Oct 13 11:18:23 np0005485008 systemd-logind[784]: Removed session 12.
Oct 13 11:18:28 np0005485008 systemd-logind[784]: New session 13 of user zuul.
Oct 13 11:18:28 np0005485008 systemd[1]: Started Session 13 of User zuul.
Oct 13 11:18:30 np0005485008 python3.9[47740]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:18:31 np0005485008 python3.9[47896]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 13 11:18:32 np0005485008 python3.9[48049]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 11:18:33 np0005485008 python3.9[48207]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 11:18:34 np0005485008 python3.9[48367]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:18:35 np0005485008 python3.9[48451]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:18:38 np0005485008 python3.9[48612]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:18:50 np0005485008 kernel: SELinux:  Converting 2725 SID table entries...
Oct 13 11:18:50 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 11:18:50 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 11:18:50 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 11:18:50 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 11:18:50 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 11:18:50 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 11:18:50 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 11:18:51 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 13 11:18:51 np0005485008 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 13 11:18:52 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 11:18:52 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 11:18:52 np0005485008 systemd[1]: Reloading.
Oct 13 11:18:52 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:18:52 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:18:52 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 11:18:53 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 11:18:53 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 11:18:53 np0005485008 systemd[1]: run-rcf4ef86d8ee041ae843052c7bde5b590.service: Deactivated successfully.
Oct 13 11:18:54 np0005485008 python3.9[49715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:18:54 np0005485008 systemd[1]: Reloading.
Oct 13 11:18:54 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:18:54 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:18:54 np0005485008 systemd[1]: Starting Open vSwitch Database Unit...
Oct 13 11:18:54 np0005485008 chown[49756]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 13 11:18:54 np0005485008 ovs-ctl[49761]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 13 11:18:54 np0005485008 ovs-ctl[49761]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 13 11:18:54 np0005485008 ovs-ctl[49761]: Starting ovsdb-server [  OK  ]
Oct 13 11:18:54 np0005485008 ovs-vsctl[49810]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 13 11:18:55 np0005485008 ovs-vsctl[49830]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"e8c98390-b106-43ff-9736-5afcb5548264\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 13 11:18:55 np0005485008 ovs-ctl[49761]: Configuring Open vSwitch system IDs [  OK  ]
Oct 13 11:18:55 np0005485008 ovs-ctl[49761]: Enabling remote OVSDB managers [  OK  ]
Oct 13 11:18:55 np0005485008 systemd[1]: Started Open vSwitch Database Unit.
Oct 13 11:18:55 np0005485008 ovs-vsctl[49836]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 13 11:18:55 np0005485008 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 13 11:18:55 np0005485008 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 13 11:18:55 np0005485008 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 13 11:18:55 np0005485008 kernel: openvswitch: Open vSwitch switching datapath
Oct 13 11:18:55 np0005485008 ovs-ctl[49880]: Inserting openvswitch module [  OK  ]
Oct 13 11:18:55 np0005485008 ovs-ctl[49849]: Starting ovs-vswitchd [  OK  ]
Oct 13 11:18:55 np0005485008 ovs-vsctl[49897]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 13 11:18:55 np0005485008 ovs-ctl[49849]: Enabling remote OVSDB managers [  OK  ]
Oct 13 11:18:55 np0005485008 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 13 11:18:55 np0005485008 systemd[1]: Starting Open vSwitch...
Oct 13 11:18:55 np0005485008 systemd[1]: Finished Open vSwitch.
Oct 13 11:18:56 np0005485008 python3.9[50049]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:18:57 np0005485008 python3.9[50201]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 13 11:18:58 np0005485008 kernel: SELinux:  Converting 2739 SID table entries...
Oct 13 11:18:58 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 11:18:58 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 11:18:58 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 11:18:58 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 11:18:58 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 11:18:58 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 11:18:58 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 11:18:59 np0005485008 python3.9[50356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:19:00 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 13 11:19:01 np0005485008 python3.9[50514]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:19:03 np0005485008 python3.9[50667]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:19:04 np0005485008 python3.9[50954]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 11:19:05 np0005485008 python3.9[51104]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:19:06 np0005485008 python3.9[51258]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:19:08 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 11:19:08 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 11:19:08 np0005485008 systemd[1]: Reloading.
Oct 13 11:19:09 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:19:09 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:19:09 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 11:19:09 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 11:19:09 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 11:19:09 np0005485008 systemd[1]: run-r385f465aa9e1417b92b752944543198c.service: Deactivated successfully.
Oct 13 11:19:10 np0005485008 python3.9[51576]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:19:10 np0005485008 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 13 11:19:10 np0005485008 systemd[1]: Stopped Network Manager Wait Online.
Oct 13 11:19:10 np0005485008 systemd[1]: Stopping Network Manager Wait Online...
Oct 13 11:19:10 np0005485008 systemd[1]: Stopping Network Manager...
Oct 13 11:19:10 np0005485008 NetworkManager[3948]: <info>  [1760368750.4045] caught SIGTERM, shutting down normally.
Oct 13 11:19:10 np0005485008 NetworkManager[3948]: <info>  [1760368750.4069] dhcp4 (eth0): canceled DHCP transaction
Oct 13 11:19:10 np0005485008 NetworkManager[3948]: <info>  [1760368750.4069] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 11:19:10 np0005485008 NetworkManager[3948]: <info>  [1760368750.4069] dhcp4 (eth0): state changed no lease
Oct 13 11:19:10 np0005485008 NetworkManager[3948]: <info>  [1760368750.4075] manager: NetworkManager state is now CONNECTED_SITE
Oct 13 11:19:10 np0005485008 NetworkManager[3948]: <info>  [1760368750.4142] exiting (success)
Oct 13 11:19:10 np0005485008 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 11:19:10 np0005485008 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 11:19:10 np0005485008 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 13 11:19:10 np0005485008 systemd[1]: Stopped Network Manager.
Oct 13 11:19:10 np0005485008 systemd[1]: NetworkManager.service: Consumed 12.205s CPU time, 4.1M memory peak, read 0B from disk, written 38.5K to disk.
Oct 13 11:19:10 np0005485008 systemd[1]: Starting Network Manager...
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.4722] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:19574273-8afa-412a-830f-e7555ab9fd7b)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.4726] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.4795] manager[0x55600cce4090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 13 11:19:10 np0005485008 systemd[1]: Starting Hostname Service...
Oct 13 11:19:10 np0005485008 systemd[1]: Started Hostname Service.
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5601] hostname: hostname: using hostnamed
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5601] hostname: static hostname changed from (none) to "compute-1"
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5608] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5614] manager[0x55600cce4090]: rfkill: Wi-Fi hardware radio set enabled
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5614] manager[0x55600cce4090]: rfkill: WWAN hardware radio set enabled
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5636] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5645] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5646] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5646] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5647] manager: Networking is enabled by state file
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5650] settings: Loaded settings plugin: keyfile (internal)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5653] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5674] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5686] dhcp: init: Using DHCP client 'internal'
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5688] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5694] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5699] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5706] device (lo): Activation: starting connection 'lo' (05f88e04-4bde-42e0-981d-e26944b28dbb)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5712] device (eth0): carrier: link connected
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5716] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5719] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5719] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5723] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5729] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5735] device (eth1): carrier: link connected
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5739] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5743] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (9c569e06-dba0-5a1c-999c-691ec6c75ed2) (indicated)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5743] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5747] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5755] device (eth1): Activation: starting connection 'ci-private-network' (9c569e06-dba0-5a1c-999c-691ec6c75ed2)
Oct 13 11:19:10 np0005485008 systemd[1]: Started Network Manager.
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5762] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5769] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5773] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5774] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5777] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5779] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5781] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5783] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5789] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5796] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5798] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5816] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.5832] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6217] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6227] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 13 11:19:10 np0005485008 systemd[1]: Starting Network Manager Wait Online...
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6286] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6292] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6299] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6308] device (lo): Activation: successful, device activated.
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6315] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6318] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6321] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6324] device (eth1): Activation: successful, device activated.
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6333] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6335] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6339] manager: NetworkManager state is now CONNECTED_SITE
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6343] device (eth0): Activation: successful, device activated.
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6348] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 13 11:19:10 np0005485008 NetworkManager[51587]: <info>  [1760368750.6350] manager: startup complete
Oct 13 11:19:10 np0005485008 systemd[1]: Finished Network Manager Wait Online.
Oct 13 11:19:11 np0005485008 python3.9[51802]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:19:16 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 11:19:16 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 11:19:16 np0005485008 systemd[1]: Reloading.
Oct 13 11:19:16 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:19:16 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:19:17 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 11:19:17 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 11:19:17 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 11:19:17 np0005485008 systemd[1]: run-rc4a9ee7e50b0455dbe6d51ec90511c65.service: Deactivated successfully.
Oct 13 11:19:18 np0005485008 python3.9[52263]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:19:19 np0005485008 python3.9[52415]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:20 np0005485008 python3.9[52569]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:20 np0005485008 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 11:19:21 np0005485008 python3.9[52721]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:22 np0005485008 python3.9[52873]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:23 np0005485008 python3.9[53025]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:24 np0005485008 python3.9[53177]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:19:24 np0005485008 python3.9[53300]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368763.5682218-444-96613471474259/.source _original_basename=.rcdyb020 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:25 np0005485008 python3.9[53452]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:26 np0005485008 python3.9[53604]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 13 11:19:27 np0005485008 python3.9[53756]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:29 np0005485008 python3.9[54183]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 13 11:19:31 np0005485008 ansible-async_wrapper.py[54358]: Invoked with j421226924379 300 /home/zuul/.ansible/tmp/ansible-tmp-1760368770.304574-576-93280125672044/AnsiballZ_edpm_os_net_config.py _
Oct 13 11:19:31 np0005485008 ansible-async_wrapper.py[54361]: Starting module and watcher
Oct 13 11:19:31 np0005485008 ansible-async_wrapper.py[54361]: Start watching 54362 (300)
Oct 13 11:19:31 np0005485008 ansible-async_wrapper.py[54362]: Start module (54362)
Oct 13 11:19:31 np0005485008 ansible-async_wrapper.py[54358]: Return async_wrapper task started.
Oct 13 11:19:31 np0005485008 python3.9[54363]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 13 11:19:32 np0005485008 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 13 11:19:32 np0005485008 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 13 11:19:32 np0005485008 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 13 11:19:32 np0005485008 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 13 11:19:32 np0005485008 kernel: cfg80211: failed to load regulatory.db
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.2493] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.2513] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3009] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3010] audit: op="connection-add" uuid="2a1abf1c-6384-4f7f-be0b-6b6118e8b58c" name="br-ex-br" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3025] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3026] audit: op="connection-add" uuid="104465c2-d845-4b34-8b02-0d1ad7fa957d" name="br-ex-port" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3047] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3048] audit: op="connection-add" uuid="f80794f6-bf0e-46a4-a14e-c57ba115aa81" name="eth1-port" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3062] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3064] audit: op="connection-add" uuid="eef7c7f9-4366-4b6b-bd15-009b577f84b6" name="vlan20-port" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3079] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3081] audit: op="connection-add" uuid="5ae4e9cf-355d-44b0-80b4-fd00465a528d" name="vlan21-port" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3096] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3098] audit: op="connection-add" uuid="ef168825-a96c-4c08-8621-489aa8e4bf0a" name="vlan22-port" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3124] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3141] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3143] audit: op="connection-add" uuid="9ddc9da4-9142-437d-8fb3-54527ddaa2df" name="br-ex-if" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3197] audit: op="connection-update" uuid="9c569e06-dba0-5a1c-999c-691ec6c75ed2" name="ci-private-network" args="ipv4.never-default,ipv4.addresses,ipv4.method,ipv4.routing-rules,ipv4.dns,ipv4.routes,ipv6.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.method,ipv6.dns,ipv6.routes,ovs-interface.type,connection.timestamp,connection.controller,connection.port-type,connection.master,connection.slave-type,ovs-external-ids.data" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3222] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3224] audit: op="connection-add" uuid="64d9d487-5c1f-4b08-a972-bbae17b9dec8" name="vlan20-if" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3247] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3249] audit: op="connection-add" uuid="8aeacb53-ea8d-4a30-8226-de984f3e5183" name="vlan21-if" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3268] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3269] audit: op="connection-add" uuid="53764bcc-f6b8-4c0b-96f0-660d03bd3522" name="vlan22-if" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3282] audit: op="connection-delete" uuid="9acb260b-8b23-3cab-89de-8291129a90a1" name="Wired connection 1" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3294] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3308] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3311] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (2a1abf1c-6384-4f7f-be0b-6b6118e8b58c)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3312] audit: op="connection-activate" uuid="2a1abf1c-6384-4f7f-be0b-6b6118e8b58c" name="br-ex-br" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3314] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3320] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3324] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (104465c2-d845-4b34-8b02-0d1ad7fa957d)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3326] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3331] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3335] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (f80794f6-bf0e-46a4-a14e-c57ba115aa81)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3336] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3345] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3349] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (eef7c7f9-4366-4b6b-bd15-009b577f84b6)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3351] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3357] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3362] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (5ae4e9cf-355d-44b0-80b4-fd00465a528d)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3364] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3370] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3375] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ef168825-a96c-4c08-8621-489aa8e4bf0a)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3376] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3378] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3380] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3387] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3391] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3395] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (9ddc9da4-9142-437d-8fb3-54527ddaa2df)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3396] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3399] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3400] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3402] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3403] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3416] device (eth1): disconnecting for new activation request.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3417] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3420] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3422] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3423] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3426] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3430] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3435] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (64d9d487-5c1f-4b08-a972-bbae17b9dec8)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3435] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3439] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3441] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3442] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3444] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3449] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3453] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (8aeacb53-ea8d-4a30-8226-de984f3e5183)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3454] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3457] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3459] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3460] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3463] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3468] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3472] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (53764bcc-f6b8-4c0b-96f0-660d03bd3522)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3473] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3476] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3477] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3478] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3479] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3491] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3493] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3496] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3497] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3502] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3505] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3509] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3511] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3512] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3516] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3518] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3521] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3522] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3526] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3529] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3531] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 kernel: ovs-system: entered promiscuous mode
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3532] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3535] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3539] dhcp4 (eth0): canceled DHCP transaction
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3539] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3539] dhcp4 (eth0): state changed no lease
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3540] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3552] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3554] audit: op="device-reapply" interface="eth1" ifindex=3 pid=54364 uid=0 result="fail" reason="Device is not activated"
Oct 13 11:19:33 np0005485008 systemd-udevd[54368]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:19:33 np0005485008 kernel: Timeout policy base is empty
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3562] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3626] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3629] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Oct 13 11:19:33 np0005485008 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3667] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3678] device (eth1): disconnecting for new activation request.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3679] audit: op="connection-activate" uuid="9c569e06-dba0-5a1c-999c-691ec6c75ed2" name="ci-private-network" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3709] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54364 uid=0 result="success"
Oct 13 11:19:33 np0005485008 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3758] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 13 11:19:33 np0005485008 kernel: br-ex: entered promiscuous mode
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3972] device (eth1): Activation: starting connection 'ci-private-network' (9c569e06-dba0-5a1c-999c-691ec6c75ed2)
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3978] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3989] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.3994] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4008] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4013] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4025] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4028] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4030] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4032] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4034] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 kernel: vlan22: entered promiscuous mode
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4058] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 systemd-udevd[54369]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4087] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4091] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4096] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4101] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4105] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4111] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4117] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4123] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 kernel: vlan20: entered promiscuous mode
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4128] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4131] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4145] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4151] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4174] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4186] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4196] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4204] device (eth1): Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4212] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 13 11:19:33 np0005485008 kernel: vlan21: entered promiscuous mode
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4231] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4266] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4269] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4285] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4294] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4300] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4305] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4310] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4321] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4328] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4383] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4387] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4396] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4405] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4427] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4462] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4464] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 13 11:19:33 np0005485008 NetworkManager[51587]: <info>  [1760368773.4470] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 13 11:19:34 np0005485008 NetworkManager[51587]: <info>  [1760368774.5699] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54364 uid=0 result="success"
Oct 13 11:19:34 np0005485008 NetworkManager[51587]: <info>  [1760368774.7071] checkpoint[0x55600ccba950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 13 11:19:34 np0005485008 NetworkManager[51587]: <info>  [1760368774.7073] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54364 uid=0 result="success"
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.0029] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54364 uid=0 result="success"
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.0043] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54364 uid=0 result="success"
Oct 13 11:19:35 np0005485008 python3.9[54698]: ansible-ansible.legacy.async_status Invoked with jid=j421226924379.54358 mode=status _async_dir=/root/.ansible_async
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.1925] audit: op="networking-control" arg="global-dns-configuration" pid=54364 uid=0 result="success"
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.1955] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.1985] audit: op="networking-control" arg="global-dns-configuration" pid=54364 uid=0 result="success"
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.2015] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54364 uid=0 result="success"
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.3469] checkpoint[0x55600ccbaa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 13 11:19:35 np0005485008 NetworkManager[51587]: <info>  [1760368775.3475] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54364 uid=0 result="success"
Oct 13 11:19:35 np0005485008 ansible-async_wrapper.py[54362]: Module complete (54362)
Oct 13 11:19:36 np0005485008 ansible-async_wrapper.py[54361]: Done in kid B.
Oct 13 11:19:38 np0005485008 python3.9[54802]: ansible-ansible.legacy.async_status Invoked with jid=j421226924379.54358 mode=status _async_dir=/root/.ansible_async
Oct 13 11:19:39 np0005485008 python3.9[54902]: ansible-ansible.legacy.async_status Invoked with jid=j421226924379.54358 mode=cleanup _async_dir=/root/.ansible_async
Oct 13 11:19:40 np0005485008 python3.9[55054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:19:40 np0005485008 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 11:19:40 np0005485008 python3.9[55177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368779.4579182-630-206858623804181/.source.returncode _original_basename=.3vlpyann follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:41 np0005485008 python3.9[55331]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:19:41 np0005485008 python3.9[55454]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368780.9082131-662-161220418017803/.source.cfg _original_basename=.kh78_5oc follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:19:42 np0005485008 python3.9[55607]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:19:42 np0005485008 systemd[1]: Reloading Network Manager...
Oct 13 11:19:42 np0005485008 NetworkManager[51587]: <info>  [1760368782.8681] audit: op="reload" arg="0" pid=55611 uid=0 result="success"
Oct 13 11:19:42 np0005485008 NetworkManager[51587]: <info>  [1760368782.8691] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 13 11:19:42 np0005485008 systemd[1]: Reloaded Network Manager.
Oct 13 11:19:43 np0005485008 systemd[1]: session-13.scope: Deactivated successfully.
Oct 13 11:19:43 np0005485008 systemd[1]: session-13.scope: Consumed 50.942s CPU time.
Oct 13 11:19:43 np0005485008 systemd-logind[784]: Session 13 logged out. Waiting for processes to exit.
Oct 13 11:19:43 np0005485008 systemd-logind[784]: Removed session 13.
Oct 13 11:19:48 np0005485008 systemd-logind[784]: New session 14 of user zuul.
Oct 13 11:19:48 np0005485008 systemd[1]: Started Session 14 of User zuul.
Oct 13 11:19:49 np0005485008 python3.9[55795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:19:50 np0005485008 python3.9[55949]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:19:52 np0005485008 python3.9[56139]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:19:52 np0005485008 systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 11:19:52 np0005485008 systemd[1]: session-14.scope: Consumed 2.303s CPU time.
Oct 13 11:19:52 np0005485008 systemd-logind[784]: Session 14 logged out. Waiting for processes to exit.
Oct 13 11:19:52 np0005485008 systemd-logind[784]: Removed session 14.
Oct 13 11:19:52 np0005485008 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 11:19:58 np0005485008 systemd-logind[784]: New session 15 of user zuul.
Oct 13 11:19:58 np0005485008 systemd[1]: Started Session 15 of User zuul.
Oct 13 11:19:59 np0005485008 python3.9[56322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:20:00 np0005485008 python3.9[56476]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:20:01 np0005485008 python3.9[56633]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:20:02 np0005485008 python3.9[56717]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:20:04 np0005485008 python3.9[56871]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:20:07 np0005485008 python3.9[57062]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:08 np0005485008 python3.9[57214]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:20:08 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:20:09 np0005485008 irqbalance[779]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 13 11:20:09 np0005485008 irqbalance[779]: IRQ 26 affinity is now unmanaged
Oct 13 11:20:09 np0005485008 python3.9[57378]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:09 np0005485008 python3.9[57456]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:10 np0005485008 python3.9[57608]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:11 np0005485008 python3.9[57686]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:12 np0005485008 python3.9[57838]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:12 np0005485008 python3.9[57990]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:13 np0005485008 python3.9[58142]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:14 np0005485008 python3.9[58294]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:15 np0005485008 python3.9[58446]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:20:18 np0005485008 python3.9[58599]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:20:19 np0005485008 python3.9[58753]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:20:19 np0005485008 python3.9[58905]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:20:20 np0005485008 python3.9[59057]: ansible-service_facts Invoked
Oct 13 11:20:20 np0005485008 network[59074]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:20:20 np0005485008 network[59075]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:20:20 np0005485008 network[59076]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:20:26 np0005485008 python3.9[59530]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:20:29 np0005485008 python3.9[59683]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 13 11:20:30 np0005485008 python3.9[59835]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:31 np0005485008 python3.9[59960]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368830.437418-426-5205515644221/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:32 np0005485008 python3.9[60114]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:33 np0005485008 python3.9[60239]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368832.1811583-457-252163565454591/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:34 np0005485008 python3.9[60393]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:36 np0005485008 python3.9[60547]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:20:37 np0005485008 python3.9[60631]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:20:39 np0005485008 python3.9[60785]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:20:40 np0005485008 python3.9[60869]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:20:40 np0005485008 chronyd[797]: chronyd exiting
Oct 13 11:20:40 np0005485008 systemd[1]: Stopping NTP client/server...
Oct 13 11:20:40 np0005485008 systemd[1]: chronyd.service: Deactivated successfully.
Oct 13 11:20:40 np0005485008 systemd[1]: Stopped NTP client/server.
Oct 13 11:20:40 np0005485008 systemd[1]: Starting NTP client/server...
Oct 13 11:20:40 np0005485008 chronyd[60878]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 13 11:20:40 np0005485008 chronyd[60878]: Frequency -31.642 +/- 0.336 ppm read from /var/lib/chrony/drift
Oct 13 11:20:40 np0005485008 chronyd[60878]: Loaded seccomp filter (level 2)
Oct 13 11:20:40 np0005485008 systemd[1]: Started NTP client/server.
Oct 13 11:20:41 np0005485008 systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 11:20:41 np0005485008 systemd[1]: session-15.scope: Consumed 25.706s CPU time.
Oct 13 11:20:41 np0005485008 systemd-logind[784]: Session 15 logged out. Waiting for processes to exit.
Oct 13 11:20:41 np0005485008 systemd-logind[784]: Removed session 15.
Oct 13 11:20:46 np0005485008 systemd-logind[784]: New session 16 of user zuul.
Oct 13 11:20:46 np0005485008 systemd[1]: Started Session 16 of User zuul.
Oct 13 11:20:47 np0005485008 python3.9[61057]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:20:49 np0005485008 python3.9[61213]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:50 np0005485008 python3.9[61388]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:50 np0005485008 python3.9[61466]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.uojeqc5b recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:52 np0005485008 python3.9[61618]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:52 np0005485008 python3.9[61741]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368851.2792256-108-221960728726592/.source _original_basename=.p4fe3huy follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:53 np0005485008 python3.9[61893]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:54 np0005485008 python3.9[62045]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:55 np0005485008 python3.9[62168]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760368853.944739-156-142989006328354/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:55 np0005485008 python3.9[62320]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:56 np0005485008 python3.9[62443]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760368855.3371916-156-262483064395910/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:20:57 np0005485008 python3.9[62595]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:20:58 np0005485008 python3.9[62747]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:20:59 np0005485008 python3.9[62870]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368857.9022627-230-120677137919971/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:21:00 np0005485008 python3.9[63022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:21:00 np0005485008 python3.9[63145]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368859.6972377-260-238976480970435/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:21:02 np0005485008 python3.9[63297]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:21:02 np0005485008 systemd[1]: Reloading.
Oct 13 11:21:02 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:21:02 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:21:02 np0005485008 systemd[1]: Reloading.
Oct 13 11:21:02 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:21:02 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:21:02 np0005485008 systemd[1]: Starting EDPM Container Shutdown...
Oct 13 11:21:02 np0005485008 systemd[1]: Finished EDPM Container Shutdown.
Oct 13 11:21:03 np0005485008 python3.9[63525]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:21:04 np0005485008 python3.9[63648]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368863.0373607-306-123964915939475/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:21:05 np0005485008 python3.9[63800]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:21:05 np0005485008 python3.9[63923]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368864.557879-337-228629095546708/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:21:06 np0005485008 python3.9[64075]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:21:06 np0005485008 systemd[1]: Reloading.
Oct 13 11:21:06 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:21:06 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:21:06 np0005485008 systemd[1]: Reloading.
Oct 13 11:21:07 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:21:07 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:21:07 np0005485008 systemd[1]: Starting Create netns directory...
Oct 13 11:21:07 np0005485008 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 11:21:07 np0005485008 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 11:21:07 np0005485008 systemd[1]: Finished Create netns directory.
Oct 13 11:21:08 np0005485008 python3.9[64302]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:21:08 np0005485008 network[64319]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:21:08 np0005485008 network[64320]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:21:08 np0005485008 network[64321]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:21:15 np0005485008 python3.9[64585]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:21:15 np0005485008 systemd[1]: Reloading.
Oct 13 11:21:15 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:21:15 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:21:15 np0005485008 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 13 11:21:15 np0005485008 iptables.init[64626]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 13 11:21:15 np0005485008 iptables.init[64626]: iptables: Flushing firewall rules: [  OK  ]
Oct 13 11:21:15 np0005485008 systemd[1]: iptables.service: Deactivated successfully.
Oct 13 11:21:15 np0005485008 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 13 11:21:16 np0005485008 python3.9[64822]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:21:18 np0005485008 python3.9[64976]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:21:18 np0005485008 systemd[1]: Reloading.
Oct 13 11:21:18 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:21:18 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:21:18 np0005485008 systemd[1]: Starting Netfilter Tables...
Oct 13 11:21:18 np0005485008 systemd[1]: Finished Netfilter Tables.
Oct 13 11:21:20 np0005485008 python3.9[65168]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:21:21 np0005485008 python3.9[65321]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:21:22 np0005485008 python3.9[65446]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368880.9782493-474-196588779663942/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:21:23 np0005485008 python3.9[65597]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:21:48 np0005485008 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 11:21:48 np0005485008 systemd[1]: session-16.scope: Consumed 19.267s CPU time.
Oct 13 11:21:48 np0005485008 systemd-logind[784]: Session 16 logged out. Waiting for processes to exit.
Oct 13 11:21:48 np0005485008 systemd-logind[784]: Removed session 16.
Oct 13 11:22:00 np0005485008 systemd-logind[784]: New session 17 of user zuul.
Oct 13 11:22:00 np0005485008 systemd[1]: Started Session 17 of User zuul.
Oct 13 11:22:01 np0005485008 python3.9[65791]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:22:03 np0005485008 python3.9[65947]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:03 np0005485008 python3.9[66122]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:04 np0005485008 python3.9[66200]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.f193r4p5 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:05 np0005485008 python3.9[66352]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:05 np0005485008 python3.9[66430]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.4_jzczs5 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:06 np0005485008 python3.9[66583]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:22:07 np0005485008 python3.9[66735]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:08 np0005485008 python3.9[66813]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:22:08 np0005485008 python3.9[66965]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:09 np0005485008 python3.9[67043]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:22:10 np0005485008 python3.9[67195]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:10 np0005485008 python3.9[67347]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:11 np0005485008 python3.9[67425]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:12 np0005485008 python3.9[67577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:12 np0005485008 python3.9[67655]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:14 np0005485008 python3.9[67807]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:22:14 np0005485008 systemd[1]: Reloading.
Oct 13 11:22:14 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:22:14 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:22:15 np0005485008 python3.9[67998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:15 np0005485008 python3.9[68076]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:16 np0005485008 python3.9[68228]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:17 np0005485008 python3.9[68306]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:18 np0005485008 python3.9[68458]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:22:18 np0005485008 systemd[1]: Reloading.
Oct 13 11:22:18 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:22:18 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:22:18 np0005485008 systemd[1]: Starting Create netns directory...
Oct 13 11:22:18 np0005485008 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 11:22:18 np0005485008 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 11:22:18 np0005485008 systemd[1]: Finished Create netns directory.
Oct 13 11:22:19 np0005485008 python3.9[68650]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:22:19 np0005485008 network[68667]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:22:19 np0005485008 network[68668]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:22:19 np0005485008 network[68669]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:22:24 np0005485008 python3.9[68932]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:24 np0005485008 python3.9[69010]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:25 np0005485008 python3.9[69162]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:26 np0005485008 python3.9[69314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:27 np0005485008 python3.9[69437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368946.1480339-417-16192523321019/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:28 np0005485008 python3.9[69589]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 13 11:22:28 np0005485008 systemd[1]: Starting Time & Date Service...
Oct 13 11:22:28 np0005485008 systemd[1]: Started Time & Date Service.
Oct 13 11:22:29 np0005485008 python3.9[69745]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:30 np0005485008 python3.9[69897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:30 np0005485008 python3.9[70020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368949.8993073-487-211630766746716/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:31 np0005485008 python3.9[70172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:32 np0005485008 python3.9[70295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760368951.3462784-517-225255026986551/.source.yaml _original_basename=.9ndu1h5n follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:33 np0005485008 python3.9[70447]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:34 np0005485008 python3.9[70570]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368952.980693-547-120754544628501/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:34 np0005485008 python3.9[70722]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:22:35 np0005485008 python3.9[70875]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:22:36 np0005485008 python3[71028]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 11:22:37 np0005485008 python3.9[71180]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:38 np0005485008 python3.9[71303]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368957.0818331-625-81630691700748/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:39 np0005485008 python3.9[71455]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:39 np0005485008 python3.9[71578]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368958.4860647-655-70725295801923/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:40 np0005485008 python3.9[71730]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:41 np0005485008 python3.9[71853]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368960.039745-686-53420359745649/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:41 np0005485008 python3.9[72005]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:42 np0005485008 python3.9[72128]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368961.4093163-715-144654704961182/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:43 np0005485008 python3.9[72280]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:22:43 np0005485008 python3.9[72403]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760368962.6734538-745-87308810416835/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:44 np0005485008 python3.9[72555]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:45 np0005485008 python3.9[72707]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:22:46 np0005485008 python3.9[72866]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:47 np0005485008 python3.9[73019]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:47 np0005485008 python3.9[73171]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:22:49 np0005485008 python3.9[73323]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 13 11:22:49 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 11:22:49 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 11:22:49 np0005485008 python3.9[73477]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 13 11:22:49 np0005485008 chronyd[60878]: Selected source 162.159.200.123 (pool.ntp.org)
Oct 13 11:22:50 np0005485008 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 11:22:50 np0005485008 systemd[1]: session-17.scope: Consumed 31.587s CPU time.
Oct 13 11:22:50 np0005485008 systemd-logind[784]: Session 17 logged out. Waiting for processes to exit.
Oct 13 11:22:50 np0005485008 systemd-logind[784]: Removed session 17.
Oct 13 11:22:55 np0005485008 systemd-logind[784]: New session 18 of user zuul.
Oct 13 11:22:55 np0005485008 systemd[1]: Started Session 18 of User zuul.
Oct 13 11:22:56 np0005485008 python3.9[73658]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 13 11:22:57 np0005485008 python3.9[73810]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:22:58 np0005485008 python3.9[73962]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:22:58 np0005485008 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 13 11:22:59 np0005485008 python3.9[74116]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCd7ghApxtgQdnOCQgexzQhPQO8X6XkhsbwC44Ua6U1nWQK1ppT6tDkzdfgybmDiygGlrRT7VdLJowfgNMD7e6tZY3RcoAHuOL6T1sAmIZ4wOWJ7BHS+c3wx1J84bvqmprp75sQTSM1QDIbsiisufKho+0F1gJsNLu0k9YLDm/yF5lgoGD8M7RAG6hFO++r7q970G8AIPDQofFcuvGVktzFfpaiC51JwJ0Tdg0B7/izZnLF/S4KUCY3feWLmQqjy8X+0ap9wYXCg0qwCAtIX+/B5HDFEWFEACYP/9OxfqbjZbJqzRZC2Y6D1u+WdgHXkP+UeK9zF7mxYerP1GLzehd28araVKT6Cg3x//cOB8ySShqBiDhnkKcbc8DHElAxLlgEfWK8lIc51R4kmrHyjrfuKtLIARm7aN1cSJM+VUnkdItKlSXdueHnvcJqE0fv5U3QDz+iqfVKRgPMYjNjT9EXeNx8ForqtoZ1aY/3uyRvWBMjhHXJEI/+AxLNdPJkh2s=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOUI3ZUYr90zU++6bkv5qY0I7YcWcLdqax/rj5dX/DtV#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCUsNca5JvDZAKyLnZ6Z+p/R3yD03ohpjF3cYWJbFvzdjCP84JWwlsGz3MEpZE4wmo6MqJ12Ls2XLLJE3+vgZyg=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgPwwEvNC1Ui8of3PptTTSUZzoOG6HG8iEi5XB8RJ465qxwsV1qnkx1WgDeT27z3dO0G5ZdrfcXwihxuhCfd55e3I2MuRxxS2oAt6PUT5IZx0nthgwFdcVE52O0KLDevN5PwKKNsIfj9aWTDVDMztK+p40RoXAa3E5RDZjjVIxH/ZrLnUMsKO18DNW9y1nD8injnyZg54uT8i1VuA+2k0pfFegunQ5fr4UhMwXKsWQTy1Zo59BDDaYJomUXq+YvzJgoL+AqGyJDr3UFQpP02snq4U+V3p58lMpY3p1OSZtBVz8YJmb8dikscqw7ltcsYxfH1QJY4AdOpqOrBGBMP9Sp/4NT95KXqky10/KZzTEUX8O71f7dXItZosu/k1Dxjo304mNYWONClUWysOM2KsY38WogXjnKflBonJxFxkjzVqbfnNYOG0uQGGJqL6y95MKE2HamVdQuQIeioLVQkkJqtYRE2ctcrgc4fFJCHCl7DF/TAYwPxaipi61WD52gwU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBbLKmfFDHi5di79+f+BQyiLhK2lVlh30wI1Uyslyxav#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG66BWtreyRaMdcJhfaEMdXbENIdowrVjPHzi1kQFQ/54/5dzbscgaJHQBj7yPtorRzAjH0inKsVZ7FwDJOq3Ic=#012 create=True mode=0644 path=/tmp/ansible.4ynsx1ey state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:00 np0005485008 python3.9[74268]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.4ynsx1ey' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:23:01 np0005485008 python3.9[74422]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.4ynsx1ey state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:01 np0005485008 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 11:23:01 np0005485008 systemd[1]: session-18.scope: Consumed 3.395s CPU time.
Oct 13 11:23:01 np0005485008 systemd-logind[784]: Session 18 logged out. Waiting for processes to exit.
Oct 13 11:23:01 np0005485008 systemd-logind[784]: Removed session 18.
Oct 13 11:23:07 np0005485008 systemd-logind[784]: New session 19 of user zuul.
Oct 13 11:23:07 np0005485008 systemd[1]: Started Session 19 of User zuul.
Oct 13 11:23:08 np0005485008 python3.9[74600]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:23:09 np0005485008 python3.9[74756]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 11:23:10 np0005485008 python3.9[74910]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:23:11 np0005485008 python3.9[75063]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:23:12 np0005485008 python3.9[75216]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:23:13 np0005485008 python3.9[75370]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:23:14 np0005485008 python3.9[75525]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:14 np0005485008 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 11:23:14 np0005485008 systemd[1]: session-19.scope: Consumed 4.481s CPU time.
Oct 13 11:23:14 np0005485008 systemd-logind[784]: Session 19 logged out. Waiting for processes to exit.
Oct 13 11:23:14 np0005485008 systemd-logind[784]: Removed session 19.
Oct 13 11:23:19 np0005485008 systemd-logind[784]: New session 20 of user zuul.
Oct 13 11:23:19 np0005485008 systemd[1]: Started Session 20 of User zuul.
Oct 13 11:23:20 np0005485008 python3.9[75704]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:23:21 np0005485008 python3.9[75860]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:23:22 np0005485008 python3.9[75944]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 13 11:23:24 np0005485008 python3.9[76095]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:23:26 np0005485008 python3.9[76246]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 11:23:27 np0005485008 python3.9[76396]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:23:27 np0005485008 python3.9[76546]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:23:28 np0005485008 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 11:23:28 np0005485008 systemd[1]: session-20.scope: Consumed 6.136s CPU time.
Oct 13 11:23:28 np0005485008 systemd-logind[784]: Session 20 logged out. Waiting for processes to exit.
Oct 13 11:23:28 np0005485008 systemd-logind[784]: Removed session 20.
Oct 13 11:23:33 np0005485008 systemd-logind[784]: New session 21 of user zuul.
Oct 13 11:23:33 np0005485008 systemd[1]: Started Session 21 of User zuul.
Oct 13 11:23:34 np0005485008 python3.9[76724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:23:36 np0005485008 python3.9[76880]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:37 np0005485008 python3.9[77032]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:37 np0005485008 python3.9[77184]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:38 np0005485008 python3.9[77307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369017.2922547-114-173188211770004/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=45342aace422e57bf81333b92fb1e49c1a3018ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:39 np0005485008 python3.9[77459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:40 np0005485008 python3.9[77582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369018.9828677-114-251455178775496/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=b242947c5c715aa7bce2b298f410ac42c2a43655 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:40 np0005485008 python3.9[77734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:41 np0005485008 python3.9[77857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369020.2501016-114-105255608320010/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=399dacbe3bf663547785da4540295884d894263c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:42 np0005485008 python3.9[78009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:42 np0005485008 python3.9[78161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:43 np0005485008 python3.9[78313]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:44 np0005485008 python3.9[78436]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369023.014-232-161697618177642/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=13584ff5c914917406d6365f1c3e97c87ee2f82f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:44 np0005485008 python3.9[78588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:45 np0005485008 python3.9[78711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369024.2712657-232-82463872150848/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=c39a6d2aeb8015be0211bcdc0664e41aeab49598 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:46 np0005485008 python3.9[78863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:46 np0005485008 python3.9[78986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369025.6952806-232-23377138116373/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=8bc3ccc6e64b0e9cc51f5d3d7bbf072940688ab4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:47 np0005485008 python3.9[79138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:48 np0005485008 python3.9[79290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:48 np0005485008 python3.9[79442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:49 np0005485008 python3.9[79565]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369028.376766-351-277361542727009/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=d11fd04f5489f786b21b56d55e8763e7180faefe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:50 np0005485008 python3.9[79717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:50 np0005485008 python3.9[79841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369029.6747317-351-90405952296456/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=62859057cc6357ab1e40850695fbd46367be8f19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:51 np0005485008 python3.9[79993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:52 np0005485008 python3.9[80116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369030.872373-351-40727716805350/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=a053b59f2594ede6781b8fa63541a83d6c87aa1c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:53 np0005485008 python3.9[80268]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:53 np0005485008 python3.9[80420]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:54 np0005485008 python3.9[80572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:54 np0005485008 python3.9[80695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369033.9395459-467-141176343928049/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=e740e696459db8e717cc3b8f18da23b9b71220ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:55 np0005485008 python3.9[80847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:56 np0005485008 python3.9[80970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369035.1560934-467-47880378721887/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=62859057cc6357ab1e40850695fbd46367be8f19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:57 np0005485008 python3.9[81122]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:23:57 np0005485008 python3.9[81245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369036.6799233-467-14765292751658/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=e8c0dec7b7776bed23ff0cbc159c78dc45fa7f7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:23:59 np0005485008 python3.9[81397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:23:59 np0005485008 python3.9[81549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:00 np0005485008 python3.9[81672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369039.3918195-595-187870830365850/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:01 np0005485008 python3.9[81824]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:02 np0005485008 python3.9[81976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:02 np0005485008 python3.9[82099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369041.5296078-643-258588234917677/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:03 np0005485008 python3.9[82251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:04 np0005485008 python3.9[82403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:04 np0005485008 python3.9[82526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369043.6498346-691-136218124810833/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:05 np0005485008 python3.9[82678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:06 np0005485008 python3.9[82830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:07 np0005485008 python3.9[82953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369045.7990577-736-167829108956192/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:07 np0005485008 python3.9[83105]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:08 np0005485008 python3.9[83257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:09 np0005485008 python3.9[83380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369048.0730498-783-257244857331016/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:09 np0005485008 python3.9[83532]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:10 np0005485008 python3.9[83684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:11 np0005485008 python3.9[83807]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369050.091187-828-268068327906758/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:12 np0005485008 python3.9[83959]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:12 np0005485008 python3.9[84111]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:13 np0005485008 python3.9[84234]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369052.2388554-876-201772388845423/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1d44c15bf0510ac267971146664e62b6bd7b43c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:15 np0005485008 systemd-logind[784]: Session 21 logged out. Waiting for processes to exit.
Oct 13 11:24:15 np0005485008 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 11:24:15 np0005485008 systemd[1]: session-21.scope: Consumed 30.751s CPU time.
Oct 13 11:24:15 np0005485008 systemd-logind[784]: Removed session 21.
Oct 13 11:24:21 np0005485008 systemd-logind[784]: New session 22 of user zuul.
Oct 13 11:24:21 np0005485008 systemd[1]: Started Session 22 of User zuul.
Oct 13 11:24:22 np0005485008 python3.9[84412]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:24:22 np0005485008 systemd[1]: packagekit.service: Deactivated successfully.
Oct 13 11:24:23 np0005485008 python3.9[84568]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:24 np0005485008 python3.9[84720]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:24:25 np0005485008 python3.9[84870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:24:26 np0005485008 python3.9[85022]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 13 11:24:28 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 13 11:24:28 np0005485008 python3.9[85178]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:24:29 np0005485008 python3.9[85262]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:24:31 np0005485008 python3.9[85415]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:24:33 np0005485008 python3[85571]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 13 11:24:34 np0005485008 python3.9[85723]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:35 np0005485008 python3.9[85875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:36 np0005485008 python3.9[85953]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:37 np0005485008 python3.9[86105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:37 np0005485008 python3.9[86183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.nqx9qs_q recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:38 np0005485008 python3.9[86335]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:38 np0005485008 python3.9[86413]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:39 np0005485008 python3.9[86565]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:24:40 np0005485008 python3[86718]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 11:24:41 np0005485008 python3.9[86870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:42 np0005485008 python3.9[86995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369081.2208035-300-155544013341440/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:43 np0005485008 python3.9[87147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:43 np0005485008 python3.9[87272]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369082.772861-330-204728381249589/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:44 np0005485008 python3.9[87424]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:45 np0005485008 python3.9[87549]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369084.2485092-360-10432886699553/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:46 np0005485008 python3.9[87701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:46 np0005485008 python3.9[87826]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369085.718578-390-53654412692824/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:47 np0005485008 python3.9[87978]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:24:48 np0005485008 python3.9[88103]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369087.0583668-420-85713038203909/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:49 np0005485008 python3.9[88255]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:49 np0005485008 python3.9[88407]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:24:50 np0005485008 python3.9[88562]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:51 np0005485008 python3.9[88714]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:24:52 np0005485008 python3.9[88867]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:24:52 np0005485008 python3.9[89021]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:24:54 np0005485008 python3.9[89176]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:24:55 np0005485008 python3.9[89326]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:24:56 np0005485008 python3.9[89479]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:24:56 np0005485008 ovs-vsctl[89480]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 13 11:24:57 np0005485008 python3.9[89632]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:24:58 np0005485008 python3.9[89787]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:24:58 np0005485008 ovs-vsctl[89788]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 13 11:24:59 np0005485008 python3.9[89938]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:25:00 np0005485008 python3.9[90092]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:00 np0005485008 python3.9[90244]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:01 np0005485008 python3.9[90322]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:02 np0005485008 python3.9[90474]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:02 np0005485008 python3.9[90552]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:03 np0005485008 python3.9[90704]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:04 np0005485008 python3.9[90856]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:05 np0005485008 python3.9[90934]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:05 np0005485008 python3.9[91086]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:06 np0005485008 python3.9[91164]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:07 np0005485008 python3.9[91316]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:25:07 np0005485008 systemd[1]: Reloading.
Oct 13 11:25:07 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:25:07 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:25:08 np0005485008 python3.9[91507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:08 np0005485008 python3.9[91585]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:09 np0005485008 python3.9[91737]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:10 np0005485008 python3.9[91815]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:11 np0005485008 python3.9[91967]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:25:11 np0005485008 systemd[1]: Reloading.
Oct 13 11:25:11 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:25:11 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:25:11 np0005485008 systemd[1]: Starting Create netns directory...
Oct 13 11:25:11 np0005485008 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 11:25:11 np0005485008 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 11:25:11 np0005485008 systemd[1]: Finished Create netns directory.
Oct 13 11:25:12 np0005485008 python3.9[92160]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:13 np0005485008 python3.9[92312]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:13 np0005485008 python3.9[92435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369112.7169812-922-127663569045699/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:15 np0005485008 python3.9[92587]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:15 np0005485008 python3.9[92739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:16 np0005485008 python3.9[92862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369115.3107736-972-280227258553086/.source.json _original_basename=.2o391fz8 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:17 np0005485008 python3.9[93014]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:19 np0005485008 python3.9[93442]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 13 11:25:20 np0005485008 python3.9[93594]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:25:21 np0005485008 python3.9[93746]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 11:25:21 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:25:23 np0005485008 python3[93908]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:25:23 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:25:23 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:25:23 np0005485008 podman[93945]: 2025-10-13 15:25:23.643620382 +0000 UTC m=+0.069425843 container create a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 11:25:23 np0005485008 podman[93945]: 2025-10-13 15:25:23.610352396 +0000 UTC m=+0.036157947 image pull 96ad696e7914500f1daa441ab9a026a0f524ff8aa3b224853f7d517ebfe3b2e5 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 13 11:25:23 np0005485008 python3[93908]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 13 11:25:24 np0005485008 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 13 11:25:24 np0005485008 python3.9[94135]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:25:25 np0005485008 python3.9[94289]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:26 np0005485008 python3.9[94365]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:25:27 np0005485008 python3.9[94516]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760369126.3241355-1148-206461137259762/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:25:27 np0005485008 python3.9[94592]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:25:27 np0005485008 systemd[1]: Reloading.
Oct 13 11:25:27 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:25:27 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:25:28 np0005485008 python3.9[94702]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:25:28 np0005485008 systemd[1]: Reloading.
Oct 13 11:25:28 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:25:28 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:25:28 np0005485008 systemd[1]: Starting ovn_controller container...
Oct 13 11:25:28 np0005485008 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 13 11:25:28 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:25:28 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb85a98c07302548cf9f62cb47ae1db2abfc34b46b7f8d9b604596c22bf68fc/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 11:25:28 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035.
Oct 13 11:25:28 np0005485008 podman[94743]: 2025-10-13 15:25:28.978891371 +0000 UTC m=+0.150913763 container init a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 11:25:28 np0005485008 ovn_controller[94758]: + sudo -E kolla_set_configs
Oct 13 11:25:29 np0005485008 podman[94743]: 2025-10-13 15:25:29.015160451 +0000 UTC m=+0.187182833 container start a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 11:25:29 np0005485008 edpm-start-podman-container[94743]: ovn_controller
Oct 13 11:25:29 np0005485008 systemd[1]: Created slice User Slice of UID 0.
Oct 13 11:25:29 np0005485008 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 11:25:29 np0005485008 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 11:25:29 np0005485008 systemd[1]: Starting User Manager for UID 0...
Oct 13 11:25:29 np0005485008 edpm-start-podman-container[94742]: Creating additional drop-in dependency for "ovn_controller" (a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035)
Oct 13 11:25:29 np0005485008 systemd[1]: Reloading.
Oct 13 11:25:29 np0005485008 podman[94764]: 2025-10-13 15:25:29.122182234 +0000 UTC m=+0.088824593 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:25:29 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:25:29 np0005485008 systemd[94792]: Queued start job for default target Main User Target.
Oct 13 11:25:29 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:25:29 np0005485008 systemd[94792]: Created slice User Application Slice.
Oct 13 11:25:29 np0005485008 systemd[94792]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 11:25:29 np0005485008 systemd[94792]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 11:25:29 np0005485008 systemd[94792]: Reached target Paths.
Oct 13 11:25:29 np0005485008 systemd[94792]: Reached target Timers.
Oct 13 11:25:29 np0005485008 systemd[94792]: Starting D-Bus User Message Bus Socket...
Oct 13 11:25:29 np0005485008 systemd[94792]: Starting Create User's Volatile Files and Directories...
Oct 13 11:25:29 np0005485008 systemd[94792]: Finished Create User's Volatile Files and Directories.
Oct 13 11:25:29 np0005485008 systemd[94792]: Listening on D-Bus User Message Bus Socket.
Oct 13 11:25:29 np0005485008 systemd[94792]: Reached target Sockets.
Oct 13 11:25:29 np0005485008 systemd[94792]: Reached target Basic System.
Oct 13 11:25:29 np0005485008 systemd[94792]: Reached target Main User Target.
Oct 13 11:25:29 np0005485008 systemd[94792]: Startup finished in 132ms.
Oct 13 11:25:29 np0005485008 systemd[1]: Started User Manager for UID 0.
Oct 13 11:25:29 np0005485008 systemd[1]: Started ovn_controller container.
Oct 13 11:25:29 np0005485008 systemd[1]: a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035-40db0127b0747dba.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 11:25:29 np0005485008 systemd[1]: a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035-40db0127b0747dba.service: Failed with result 'exit-code'.
Oct 13 11:25:29 np0005485008 systemd[1]: Started Session c1 of User root.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: INFO:__main__:Validating config file
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: INFO:__main__:Writing out command to execute
Oct 13 11:25:29 np0005485008 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: ++ cat /run_command
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + ARGS=
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + sudo kolla_copy_cacerts
Oct 13 11:25:29 np0005485008 systemd[1]: Started Session c2 of User root.
Oct 13 11:25:29 np0005485008 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + [[ ! -n '' ]]
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + . kolla_extend_start
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + umask 0022
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5228] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5237] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5251] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5256] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5261] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 13 11:25:29 np0005485008 kernel: br-int: entered promiscuous mode
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5538] manager: (ovn-bbbaff-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 13 11:25:29 np0005485008 systemd-udevd[94894]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:25:29 np0005485008 systemd-udevd[94896]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:25:29 np0005485008 kernel: genev_sys_6081: entered promiscuous mode
Oct 13 11:25:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:29Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5744] device (genev_sys_6081): carrier: link connected
Oct 13 11:25:29 np0005485008 NetworkManager[51587]: <info>  [1760369129.5748] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Oct 13 11:25:30 np0005485008 NetworkManager[51587]: <info>  [1760369130.0392] manager: (ovn-6236ce-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 13 11:25:30 np0005485008 python3.9[95026]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:25:30 np0005485008 ovs-vsctl[95027]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 13 11:25:31 np0005485008 python3.9[95179]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:25:31 np0005485008 ovs-vsctl[95181]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 13 11:25:32 np0005485008 python3.9[95334]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:25:32 np0005485008 ovs-vsctl[95335]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 13 11:25:33 np0005485008 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 11:25:33 np0005485008 systemd[1]: session-22.scope: Consumed 47.564s CPU time.
Oct 13 11:25:33 np0005485008 systemd-logind[784]: Session 22 logged out. Waiting for processes to exit.
Oct 13 11:25:33 np0005485008 systemd-logind[784]: Removed session 22.
Oct 13 11:25:38 np0005485008 systemd-logind[784]: New session 24 of user zuul.
Oct 13 11:25:38 np0005485008 systemd[1]: Started Session 24 of User zuul.
Oct 13 11:25:39 np0005485008 systemd[1]: Stopping User Manager for UID 0...
Oct 13 11:25:39 np0005485008 systemd[94792]: Activating special unit Exit the Session...
Oct 13 11:25:39 np0005485008 systemd[94792]: Stopped target Main User Target.
Oct 13 11:25:39 np0005485008 systemd[94792]: Stopped target Basic System.
Oct 13 11:25:39 np0005485008 systemd[94792]: Stopped target Paths.
Oct 13 11:25:39 np0005485008 systemd[94792]: Stopped target Sockets.
Oct 13 11:25:39 np0005485008 systemd[94792]: Stopped target Timers.
Oct 13 11:25:39 np0005485008 systemd[94792]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 11:25:39 np0005485008 systemd[94792]: Closed D-Bus User Message Bus Socket.
Oct 13 11:25:39 np0005485008 systemd[94792]: Stopped Create User's Volatile Files and Directories.
Oct 13 11:25:39 np0005485008 systemd[94792]: Removed slice User Application Slice.
Oct 13 11:25:39 np0005485008 systemd[94792]: Reached target Shutdown.
Oct 13 11:25:39 np0005485008 systemd[94792]: Finished Exit the Session.
Oct 13 11:25:39 np0005485008 systemd[94792]: Reached target Exit the Session.
Oct 13 11:25:39 np0005485008 systemd[1]: user@0.service: Deactivated successfully.
Oct 13 11:25:39 np0005485008 systemd[1]: Stopped User Manager for UID 0.
Oct 13 11:25:39 np0005485008 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 11:25:39 np0005485008 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 11:25:39 np0005485008 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 11:25:39 np0005485008 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 11:25:39 np0005485008 systemd[1]: Removed slice User Slice of UID 0.
Oct 13 11:25:39 np0005485008 python3.9[95515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:25:41 np0005485008 python3.9[95672]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:42 np0005485008 python3.9[95824]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:42 np0005485008 python3.9[95976]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:43 np0005485008 python3.9[96128]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:44 np0005485008 python3.9[96280]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:45 np0005485008 python3.9[96430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:25:46 np0005485008 python3.9[96582]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 13 11:25:47 np0005485008 python3.9[96733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:48 np0005485008 python3.9[96854]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369147.2508411-158-28432036793725/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:49 np0005485008 python3.9[97004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:49 np0005485008 python3.9[97125]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369148.9443867-188-246521502510437/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:50 np0005485008 python3.9[97277]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:25:51 np0005485008 python3.9[97361]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:25:54 np0005485008 python3.9[97514]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:25:55 np0005485008 python3.9[97667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:55 np0005485008 python3.9[97788]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369154.8323708-262-175237751517042/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:56 np0005485008 python3.9[97938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:57 np0005485008 python3.9[98059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369156.0967705-262-11747923008268/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:58 np0005485008 python3.9[98209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:25:59 np0005485008 python3.9[98330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369158.4434302-352-134501472480546/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:25:59 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:59Z|00025|memory|INFO|16384 kB peak resident set size after 30.1 seconds
Oct 13 11:25:59 np0005485008 ovn_controller[94758]: 2025-10-13T15:25:59Z|00026|memory|INFO|idl-cells-OVN_Southbound:256 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:6 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Oct 13 11:25:59 np0005485008 podman[98331]: 2025-10-13 15:25:59.645086133 +0000 UTC m=+0.103163403 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 13 11:26:00 np0005485008 python3.9[98506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:00 np0005485008 python3.9[98627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369159.6808863-352-61948276583495/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:26:01 np0005485008 python3.9[98777]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:26:02 np0005485008 python3.9[98931]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:26:03 np0005485008 python3.9[99083]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:04 np0005485008 python3.9[99161]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:26:04 np0005485008 python3.9[99313]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:05 np0005485008 python3.9[99391]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:26:06 np0005485008 python3.9[99543]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:06 np0005485008 python3.9[99695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:07 np0005485008 python3.9[99773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:08 np0005485008 python3.9[99925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:08 np0005485008 python3.9[100003]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:09 np0005485008 python3.9[100155]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:09 np0005485008 systemd[1]: Reloading.
Oct 13 11:26:09 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:26:09 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:26:10 np0005485008 python3.9[100344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:11 np0005485008 python3.9[100422]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:12 np0005485008 python3.9[100574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:12 np0005485008 python3.9[100652]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:13 np0005485008 python3.9[100804]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:13 np0005485008 systemd[1]: Reloading.
Oct 13 11:26:13 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:26:13 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:26:13 np0005485008 systemd[1]: Starting Create netns directory...
Oct 13 11:26:13 np0005485008 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 11:26:13 np0005485008 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 11:26:13 np0005485008 systemd[1]: Finished Create netns directory.
Oct 13 11:26:14 np0005485008 python3.9[100997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:26:15 np0005485008 python3.9[101149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:16 np0005485008 python3.9[101272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369175.0551767-652-195112109774781/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:26:17 np0005485008 python3.9[101424]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:26:18 np0005485008 python3.9[101576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:26:18 np0005485008 python3.9[101699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369177.4916139-702-138002281136365/.source.json _original_basename=.tdxb5rj5 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:19 np0005485008 python3.9[101851]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:22 np0005485008 python3.9[102278]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 13 11:26:23 np0005485008 python3.9[102430]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:26:24 np0005485008 python3.9[102582]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 11:26:26 np0005485008 python3[102760]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:26:26 np0005485008 podman[102798]: 2025-10-13 15:26:26.553119214 +0000 UTC m=+0.061598996 container create 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:26:26 np0005485008 podman[102798]: 2025-10-13 15:26:26.517936896 +0000 UTC m=+0.026416728 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:26:26 np0005485008 python3[102760]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:26:27 np0005485008 python3.9[102988]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:26:28 np0005485008 python3.9[103142]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:28 np0005485008 python3.9[103218]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:26:29 np0005485008 python3.9[103369]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760369188.93378-878-215577762798461/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:29 np0005485008 podman[103370]: 2025-10-13 15:26:29.803486913 +0000 UTC m=+0.105477243 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 11:26:30 np0005485008 python3.9[103470]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:26:30 np0005485008 systemd[1]: Reloading.
Oct 13 11:26:30 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:26:30 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:26:31 np0005485008 python3.9[103581]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:31 np0005485008 systemd[1]: Reloading.
Oct 13 11:26:31 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:26:31 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:26:31 np0005485008 systemd[1]: Starting ovn_metadata_agent container...
Oct 13 11:26:31 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:26:31 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d02383caeb2d45143797dd361f3524ddbae2579d9acd5e15c979494482e419d/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 13 11:26:31 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d02383caeb2d45143797dd361f3524ddbae2579d9acd5e15c979494482e419d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 11:26:31 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225.
Oct 13 11:26:31 np0005485008 podman[103622]: 2025-10-13 15:26:31.750546445 +0000 UTC m=+0.133390517 container init 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + sudo -E kolla_set_configs
Oct 13 11:26:31 np0005485008 podman[103622]: 2025-10-13 15:26:31.789991076 +0000 UTC m=+0.172835048 container start 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 11:26:31 np0005485008 edpm-start-podman-container[103622]: ovn_metadata_agent
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Validating config file
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Copying service configuration files
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Writing out command to execute
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: ++ cat /run_command
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + CMD=neutron-ovn-metadata-agent
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + ARGS=
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + sudo kolla_copy_cacerts
Oct 13 11:26:31 np0005485008 podman[103643]: 2025-10-13 15:26:31.858651729 +0000 UTC m=+0.054557048 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 13 11:26:31 np0005485008 edpm-start-podman-container[103621]: Creating additional drop-in dependency for "ovn_metadata_agent" (94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225)
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + [[ ! -n '' ]]
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + . kolla_extend_start
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: Running command: 'neutron-ovn-metadata-agent'
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + umask 0022
Oct 13 11:26:31 np0005485008 ovn_metadata_agent[103637]: + exec neutron-ovn-metadata-agent
Oct 13 11:26:31 np0005485008 systemd[1]: Reloading.
Oct 13 11:26:31 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:26:31 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:26:32 np0005485008 systemd[1]: Started ovn_metadata_agent container.
Oct 13 11:26:32 np0005485008 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 11:26:32 np0005485008 systemd[1]: session-24.scope: Consumed 36.725s CPU time.
Oct 13 11:26:32 np0005485008 systemd-logind[784]: Session 24 logged out. Waiting for processes to exit.
Oct 13 11:26:32 np0005485008 systemd-logind[784]: Removed session 24.
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.881 103642 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.881 103642 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.881 103642 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.882 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.882 103642 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.882 103642 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.882 103642 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.882 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.883 103642 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.884 103642 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.884 103642 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.884 103642 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.884 103642 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.884 103642 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.884 103642 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.884 103642 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.885 103642 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.886 103642 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.886 103642 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.886 103642 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.886 103642 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.886 103642 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.886 103642 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.887 103642 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.887 103642 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.887 103642 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.887 103642 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.887 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.887 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.887 103642 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.888 103642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.889 103642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.890 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.891 103642 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.892 103642 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.893 103642 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.894 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.895 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.896 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.897 103642 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.898 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.899 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.900 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.901 103642 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.902 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.903 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.904 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.905 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.906 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.907 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.908 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.909 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.910 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.911 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.912 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.913 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.913 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.913 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.913 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.913 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.913 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.913 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.914 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.914 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.914 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.914 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.914 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.914 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.915 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.915 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.915 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.915 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.915 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.915 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.915 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.916 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.916 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.916 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.916 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.916 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.916 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.916 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.917 103642 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.928 103642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.929 103642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.929 103642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.929 103642 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.930 103642 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.949 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name e8c98390-b106-43ff-9736-5afcb5548264 (UUID: e8c98390-b106-43ff-9736-5afcb5548264) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.978 103642 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.979 103642 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.979 103642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.979 103642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.983 103642 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.990 103642 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.996 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'e8c98390-b106-43ff-9736-5afcb5548264'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], external_ids={}, name=e8c98390-b106-43ff-9736-5afcb5548264, nb_cfg_timestamp=1760369137552, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.997 103642 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f01ba4b8af0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.999 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.999 103642 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 11:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.999 103642 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:33.999 103642 INFO oslo_service.service [-] Starting 1 workers
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.006 103642 DEBUG oslo_service.service [-] Started child 103752 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.010 103642 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpn5wdjvgn/privsep.sock']
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.010 103752 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-234169'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.043 103752 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.043 103752 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.043 103752 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.047 103752 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.053 103752 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.059 103752 INFO eventlet.wsgi.server [-] (103752) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 13 11:26:34 np0005485008 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.808 103642 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.809 103642 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpn5wdjvgn/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.651 103757 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.656 103757 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.658 103757 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.659 103757 INFO oslo.privsep.daemon [-] privsep daemon running as pid 103757
Oct 13 11:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:34.812 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[e2231354-19a1-4de4-a97c-d69a1a4363fd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.318 103757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.318 103757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.319 103757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.908 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[11f2cd1f-81ea-45b3-8523-909527986787]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.911 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, column=external_ids, values=({'neutron:ovn-metadata-id': '8991c42c-369e-5832-94a0-89d5315034f1'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.918 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.923 103642 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.924 103642 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.925 103642 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.926 103642 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.927 103642 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.928 103642 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.929 103642 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.930 103642 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.931 103642 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.932 103642 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.933 103642 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.934 103642 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.935 103642 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.936 103642 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.937 103642 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.937 103642 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.937 103642 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.937 103642 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.937 103642 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.937 103642 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.938 103642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.939 103642 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.940 103642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.941 103642 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.942 103642 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.943 103642 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.944 103642 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.945 103642 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.945 103642 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.945 103642 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.945 103642 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.945 103642 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.945 103642 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.945 103642 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.946 103642 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.947 103642 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.948 103642 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.948 103642 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.948 103642 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.948 103642 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.948 103642 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.948 103642 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.948 103642 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.949 103642 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.950 103642 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.951 103642 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.952 103642 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.953 103642 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.954 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.955 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.956 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.957 103642 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.958 103642 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.958 103642 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:26:35.958 103642 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 13 11:26:38 np0005485008 systemd-logind[784]: New session 25 of user zuul.
Oct 13 11:26:38 np0005485008 systemd[1]: Started Session 25 of User zuul.
Oct 13 11:26:39 np0005485008 python3.9[103915]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:26:40 np0005485008 python3.9[104071]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:26:42 np0005485008 python3.9[104236]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:26:42 np0005485008 systemd[1]: Reloading.
Oct 13 11:26:42 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:26:42 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:26:43 np0005485008 python3.9[104423]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:26:43 np0005485008 network[104440]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:26:43 np0005485008 network[104441]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:26:43 np0005485008 network[104442]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:26:48 np0005485008 python3.9[104706]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:49 np0005485008 python3.9[104859]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:50 np0005485008 python3.9[105012]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:51 np0005485008 python3.9[105165]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:51 np0005485008 python3.9[105318]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:52 np0005485008 python3.9[105471]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:53 np0005485008 python3.9[105624]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:26:55 np0005485008 python3.9[105778]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:56 np0005485008 python3.9[105930]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:56 np0005485008 python3.9[106082]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:57 np0005485008 python3.9[106234]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:57 np0005485008 python3.9[106386]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:58 np0005485008 python3.9[106538]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:26:59 np0005485008 python3.9[106690]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:00 np0005485008 podman[106814]: 2025-10-13 15:27:00.344738738 +0000 UTC m=+0.093923816 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 13 11:27:00 np0005485008 python3.9[106861]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:01 np0005485008 python3.9[107021]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:01 np0005485008 python3.9[107173]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:02 np0005485008 podman[107297]: 2025-10-13 15:27:02.422831923 +0000 UTC m=+0.056398176 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 11:27:02 np0005485008 python3.9[107340]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:03 np0005485008 python3.9[107496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:04 np0005485008 python3.9[107648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:04 np0005485008 python3.9[107800]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:27:06 np0005485008 python3.9[107952]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:07 np0005485008 python3.9[108104]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 11:27:08 np0005485008 python3.9[108256]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:27:08 np0005485008 systemd[1]: Reloading.
Oct 13 11:27:08 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:27:08 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:27:09 np0005485008 python3.9[108443]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:10 np0005485008 python3.9[108596]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:11 np0005485008 python3.9[108749]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:11 np0005485008 python3.9[108902]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:12 np0005485008 python3.9[109055]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:13 np0005485008 python3.9[109208]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:13 np0005485008 python3.9[109361]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:27:16 np0005485008 python3.9[109514]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 13 11:27:17 np0005485008 python3.9[109667]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 11:27:18 np0005485008 python3.9[109825]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 11:27:20 np0005485008 python3.9[109985]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:27:21 np0005485008 python3.9[110069]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:27:30 np0005485008 podman[110156]: 2025-10-13 15:27:30.81038857 +0000 UTC m=+0.102607013 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:27:32 np0005485008 podman[110256]: 2025-10-13 15:27:32.747604363 +0000 UTC m=+0.050038193 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 11:27:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:27:33.921 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:27:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:27:33.922 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:27:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:27:33.922 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:27:48 np0005485008 kernel: SELinux:  Converting 2752 SID table entries...
Oct 13 11:27:48 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 11:27:48 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 11:27:48 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 11:27:48 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 11:27:48 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 11:27:48 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 11:27:48 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 11:27:57 np0005485008 kernel: SELinux:  Converting 2752 SID table entries...
Oct 13 11:27:57 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 11:27:57 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 11:27:57 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 11:27:57 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 11:27:57 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 11:27:57 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 11:27:57 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 11:28:01 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 13 11:28:01 np0005485008 podman[110321]: 2025-10-13 15:28:01.806888182 +0000 UTC m=+0.100043415 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 11:28:03 np0005485008 podman[110346]: 2025-10-13 15:28:03.741775265 +0000 UTC m=+0.050607111 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 11:28:32 np0005485008 podman[124616]: 2025-10-13 15:28:32.802397214 +0000 UTC m=+0.093515613 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 11:28:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:28:33.925 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:28:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:28:33.929 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:28:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:28:33.929 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:28:34 np0005485008 podman[125860]: 2025-10-13 15:28:34.741927309 +0000 UTC m=+0.047872346 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 11:28:49 np0005485008 kernel: SELinux:  Converting 2753 SID table entries...
Oct 13 11:28:49 np0005485008 kernel: SELinux:  policy capability network_peer_controls=1
Oct 13 11:28:49 np0005485008 kernel: SELinux:  policy capability open_perms=1
Oct 13 11:28:49 np0005485008 kernel: SELinux:  policy capability extended_socket_class=1
Oct 13 11:28:49 np0005485008 kernel: SELinux:  policy capability always_check_network=0
Oct 13 11:28:49 np0005485008 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 13 11:28:49 np0005485008 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 13 11:28:49 np0005485008 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 13 11:28:50 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:28:50 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 13 11:28:50 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:28:50 np0005485008 dbus-broker-launch[764]: Noticed file-system modification, trigger reload.
Oct 13 11:28:57 np0005485008 systemd[1]: Stopping OpenSSH server daemon...
Oct 13 11:28:57 np0005485008 systemd[1]: sshd.service: Deactivated successfully.
Oct 13 11:28:57 np0005485008 systemd[1]: Stopped OpenSSH server daemon.
Oct 13 11:28:57 np0005485008 systemd[1]: sshd.service: Consumed 1.372s CPU time, no IO.
Oct 13 11:28:57 np0005485008 systemd[1]: Stopped target sshd-keygen.target.
Oct 13 11:28:57 np0005485008 systemd[1]: Stopping sshd-keygen.target...
Oct 13 11:28:57 np0005485008 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 11:28:57 np0005485008 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 11:28:57 np0005485008 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 13 11:28:57 np0005485008 systemd[1]: Reached target sshd-keygen.target.
Oct 13 11:28:57 np0005485008 systemd[1]: Starting OpenSSH server daemon...
Oct 13 11:28:57 np0005485008 systemd[1]: Started OpenSSH server daemon.
Oct 13 11:28:59 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 11:28:59 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 11:28:59 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:00 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:00 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:00 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 11:29:01 np0005485008 systemd[1]: Starting PackageKit Daemon...
Oct 13 11:29:02 np0005485008 systemd[1]: Started PackageKit Daemon.
Oct 13 11:29:03 np0005485008 podman[132231]: 2025-10-13 15:29:03.812611962 +0000 UTC m=+0.106276153 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 11:29:05 np0005485008 podman[134323]: 2025-10-13 15:29:05.750736163 +0000 UTC m=+0.056733315 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 11:29:07 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 11:29:07 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 11:29:07 np0005485008 systemd[1]: man-db-cache-update.service: Consumed 10.330s CPU time.
Oct 13 11:29:07 np0005485008 systemd[1]: run-r81b0c8ed99c24b7e932e6d14ecd2760b.service: Deactivated successfully.
Oct 13 11:29:10 np0005485008 python3.9[136624]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:29:10 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:10 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:10 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:11 np0005485008 python3.9[136814]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:29:11 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:11 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:11 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:12 np0005485008 python3.9[137005]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:29:12 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:12 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:12 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:13 np0005485008 python3.9[137195]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:29:13 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:13 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:13 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:14 np0005485008 python3.9[137385]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:14 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:14 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:14 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:15 np0005485008 python3.9[137574]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:16 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:16 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:16 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:17 np0005485008 python3.9[137763]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:17 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:17 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:17 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:18 np0005485008 python3.9[137954]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:19 np0005485008 python3.9[138109]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:19 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:19 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:19 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:20 np0005485008 python3.9[138299]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 13 11:29:21 np0005485008 systemd[1]: Reloading.
Oct 13 11:29:21 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:29:21 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:29:22 np0005485008 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 13 11:29:22 np0005485008 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 13 11:29:22 np0005485008 python3.9[138494]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:23 np0005485008 python3.9[138649]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:24 np0005485008 python3.9[138804]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:25 np0005485008 python3.9[138959]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:26 np0005485008 python3.9[139114]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:27 np0005485008 python3.9[139269]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:27 np0005485008 python3.9[139424]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:28 np0005485008 python3.9[139579]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:29 np0005485008 python3.9[139734]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:30 np0005485008 python3.9[139889]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:31 np0005485008 python3.9[140044]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:31 np0005485008 python3.9[140199]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:32 np0005485008 python3.9[140354]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:33 np0005485008 python3.9[140509]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 13 11:29:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:29:33.925 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:29:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:29:33.927 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:29:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:29:33.928 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:29:34 np0005485008 podman[140513]: 2025-10-13 15:29:34.906397975 +0000 UTC m=+0.269111466 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 11:29:36 np0005485008 podman[140564]: 2025-10-13 15:29:36.752432048 +0000 UTC m=+0.054688412 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:29:38 np0005485008 python3.9[140711]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:29:38 np0005485008 python3.9[140863]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:29:39 np0005485008 python3.9[141015]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:29:40 np0005485008 python3.9[141167]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:29:41 np0005485008 python3.9[141319]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:29:42 np0005485008 python3.9[141471]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:29:43 np0005485008 python3.9[141623]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:44 np0005485008 python3.9[141748]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369382.5543082-1095-269752668365490/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:45 np0005485008 python3.9[141900]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:45 np0005485008 python3.9[142025]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369384.4468665-1095-50191014632091/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:46 np0005485008 python3.9[142177]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:47 np0005485008 python3.9[142302]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369385.9262524-1095-215915686698141/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:47 np0005485008 python3.9[142454]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:48 np0005485008 python3.9[142579]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369387.4145381-1095-266124228713733/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:49 np0005485008 python3.9[142731]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:49 np0005485008 python3.9[142856]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369388.7795138-1095-246770689540381/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:50 np0005485008 python3.9[143008]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:51 np0005485008 python3.9[143133]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369390.0991511-1095-10568509295698/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:51 np0005485008 python3.9[143285]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:52 np0005485008 python3.9[143408]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369391.439658-1095-212411499084636/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:53 np0005485008 python3.9[143560]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:29:53 np0005485008 python3.9[143685]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760369392.7819536-1095-179822138916188/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:55 np0005485008 python3.9[143837]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 13 11:29:56 np0005485008 python3.9[143990]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:56 np0005485008 python3.9[144142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:57 np0005485008 python3.9[144294]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:58 np0005485008 python3.9[144446]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:58 np0005485008 python3.9[144598]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:29:59 np0005485008 python3.9[144750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:00 np0005485008 python3.9[144902]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:01 np0005485008 python3.9[145054]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:01 np0005485008 python3.9[145206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:02 np0005485008 python3.9[145358]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:03 np0005485008 python3.9[145510]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:03 np0005485008 python3.9[145662]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:04 np0005485008 python3.9[145814]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:05 np0005485008 podman[145938]: 2025-10-13 15:30:05.144188885 +0000 UTC m=+0.144184669 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 11:30:05 np0005485008 python3.9[145987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:06 np0005485008 python3.9[146144]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:06 np0005485008 python3.9[146267]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369405.7032416-1537-248997721996670/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:07 np0005485008 podman[146391]: 2025-10-13 15:30:07.5042282 +0000 UTC m=+0.071423294 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 13 11:30:07 np0005485008 python3.9[146437]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:08 np0005485008 python3.9[146562]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369406.9865115-1537-161895080285226/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:08 np0005485008 python3.9[146714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:09 np0005485008 python3.9[146837]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369408.482258-1537-100082343470870/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:10 np0005485008 python3.9[146989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:10 np0005485008 python3.9[147112]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369409.8915842-1537-69255628474045/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:11 np0005485008 python3.9[147264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:12 np0005485008 python3.9[147387]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369411.3049808-1537-244338825772749/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:13 np0005485008 python3.9[147539]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:13 np0005485008 python3.9[147662]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369412.5641377-1537-147569200087912/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:14 np0005485008 python3.9[147814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:15 np0005485008 python3.9[147937]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369413.9860477-1537-211284980658492/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:15 np0005485008 python3.9[148089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:16 np0005485008 python3.9[148212]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369415.454343-1537-28236798652248/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:17 np0005485008 python3.9[148364]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:17 np0005485008 python3.9[148487]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369416.774432-1537-206638604548722/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:18 np0005485008 python3.9[148639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:19 np0005485008 python3.9[148762]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369418.1622605-1537-96030886890080/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:20 np0005485008 python3.9[148914]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:20 np0005485008 python3.9[149037]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369419.5269256-1537-158945298064533/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:21 np0005485008 python3.9[149189]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:22 np0005485008 python3.9[149312]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369420.8934798-1537-150681514013346/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:23 np0005485008 python3.9[149464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:24 np0005485008 python3.9[149587]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369422.7888532-1537-8406303436287/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:25 np0005485008 python3.9[149739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:26 np0005485008 python3.9[149862]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369424.47487-1537-281218793954266/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:27 np0005485008 python3.9[150012]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:30:28 np0005485008 python3.9[150167]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 13 11:30:30 np0005485008 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 13 11:30:30 np0005485008 python3.9[150323]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:31 np0005485008 python3.9[150475]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:31 np0005485008 python3.9[150627]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:32 np0005485008 python3.9[150779]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:33 np0005485008 python3.9[150931]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:30:33.927 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:30:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:30:33.928 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:30:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:30:33.929 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:30:34 np0005485008 python3.9[151083]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:34 np0005485008 python3.9[151235]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:35 np0005485008 podman[151359]: 2025-10-13 15:30:35.373563242 +0000 UTC m=+0.122471896 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 11:30:35 np0005485008 python3.9[151401]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:36 np0005485008 python3.9[151562]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:36 np0005485008 python3.9[151714]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:37 np0005485008 podman[151835]: 2025-10-13 15:30:37.753428613 +0000 UTC m=+0.051281115 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 11:30:38 np0005485008 python3.9[151883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:30:38 np0005485008 systemd[1]: Reloading.
Oct 13 11:30:38 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:30:38 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:30:38 np0005485008 systemd[1]: Starting libvirt logging daemon socket...
Oct 13 11:30:38 np0005485008 systemd[1]: Listening on libvirt logging daemon socket.
Oct 13 11:30:38 np0005485008 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 13 11:30:38 np0005485008 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 13 11:30:38 np0005485008 systemd[1]: Starting libvirt logging daemon...
Oct 13 11:30:38 np0005485008 systemd[1]: Started libvirt logging daemon.
Oct 13 11:30:39 np0005485008 python3.9[152076]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:30:39 np0005485008 systemd[1]: Reloading.
Oct 13 11:30:39 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:30:39 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:30:39 np0005485008 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 13 11:30:39 np0005485008 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 13 11:30:39 np0005485008 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 13 11:30:39 np0005485008 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 13 11:30:39 np0005485008 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 13 11:30:39 np0005485008 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 13 11:30:39 np0005485008 systemd[1]: Starting libvirt nodedev daemon...
Oct 13 11:30:39 np0005485008 systemd[1]: Started libvirt nodedev daemon.
Oct 13 11:30:40 np0005485008 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 13 11:30:40 np0005485008 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 13 11:30:40 np0005485008 python3.9[152292]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:30:40 np0005485008 systemd[1]: Reloading.
Oct 13 11:30:40 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:30:40 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:30:40 np0005485008 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 13 11:30:40 np0005485008 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 13 11:30:40 np0005485008 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 13 11:30:40 np0005485008 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 13 11:30:40 np0005485008 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 13 11:30:40 np0005485008 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 13 11:30:40 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 11:30:40 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 11:30:41 np0005485008 setroubleshoot[152218]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 65014f3f-ee26-4cdf-8150-e6454e8a1be0
Oct 13 11:30:41 np0005485008 setroubleshoot[152218]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 13 11:30:41 np0005485008 setroubleshoot[152218]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 65014f3f-ee26-4cdf-8150-e6454e8a1be0
Oct 13 11:30:41 np0005485008 python3.9[152510]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:30:41 np0005485008 setroubleshoot[152218]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct 13 11:30:41 np0005485008 systemd[1]: Reloading.
Oct 13 11:30:41 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:30:41 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:30:42 np0005485008 systemd[1]: Listening on libvirt locking daemon socket.
Oct 13 11:30:42 np0005485008 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 13 11:30:42 np0005485008 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 11:30:42 np0005485008 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 13 11:30:42 np0005485008 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 13 11:30:42 np0005485008 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 13 11:30:42 np0005485008 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 13 11:30:42 np0005485008 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 13 11:30:42 np0005485008 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 13 11:30:42 np0005485008 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 13 11:30:42 np0005485008 systemd[1]: Starting libvirt QEMU daemon...
Oct 13 11:30:42 np0005485008 systemd[1]: Started libvirt QEMU daemon.
Oct 13 11:30:43 np0005485008 python3.9[152724]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:30:43 np0005485008 systemd[1]: Reloading.
Oct 13 11:30:43 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:30:43 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:30:43 np0005485008 systemd[1]: Starting libvirt secret daemon socket...
Oct 13 11:30:43 np0005485008 systemd[1]: Listening on libvirt secret daemon socket.
Oct 13 11:30:43 np0005485008 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 13 11:30:43 np0005485008 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 13 11:30:43 np0005485008 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 13 11:30:43 np0005485008 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 13 11:30:43 np0005485008 systemd[1]: Starting libvirt secret daemon...
Oct 13 11:30:43 np0005485008 systemd[1]: Started libvirt secret daemon.
Oct 13 11:30:44 np0005485008 python3.9[152936]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:45 np0005485008 python3.9[153088]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 11:30:46 np0005485008 python3.9[153240]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:47 np0005485008 python3.9[153363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369445.6878786-2227-36418417869270/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:48 np0005485008 python3.9[153515]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:48 np0005485008 python3.9[153667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:49 np0005485008 python3.9[153745]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:50 np0005485008 python3.9[153897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:50 np0005485008 python3.9[153975]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.sx8ql06f recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:51 np0005485008 python3.9[154127]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:51 np0005485008 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 13 11:30:51 np0005485008 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 13 11:30:52 np0005485008 python3.9[154205]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:53 np0005485008 python3.9[154357]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:30:54 np0005485008 python3[154510]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 11:30:55 np0005485008 python3.9[154662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:55 np0005485008 python3.9[154740]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:56 np0005485008 python3.9[154892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:56 np0005485008 python3.9[154970]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:57 np0005485008 python3.9[155122]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:58 np0005485008 python3.9[155200]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:30:59 np0005485008 python3.9[155352]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:30:59 np0005485008 python3.9[155430]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:00 np0005485008 python3.9[155582]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:01 np0005485008 python3.9[155707]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369459.9343834-2477-103000426990801/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:02 np0005485008 python3.9[155859]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:02 np0005485008 python3.9[156011]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:31:04 np0005485008 python3.9[156166]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:04 np0005485008 python3.9[156318]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:31:05 np0005485008 podman[156443]: 2025-10-13 15:31:05.590929963 +0000 UTC m=+0.100744333 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, tcib_managed=true)
Oct 13 11:31:05 np0005485008 python3.9[156488]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:31:06 np0005485008 python3.9[156652]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:31:07 np0005485008 python3.9[156807]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:08 np0005485008 podman[156931]: 2025-10-13 15:31:08.130020971 +0000 UTC m=+0.090650352 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 11:31:08 np0005485008 python3.9[156979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:09 np0005485008 python3.9[157102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369467.7393646-2621-176086233235572/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:09 np0005485008 python3.9[157254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:10 np0005485008 python3.9[157377]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369469.2439647-2651-124557329773578/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:11 np0005485008 python3.9[157529]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:12 np0005485008 python3.9[157652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369470.8877182-2681-1827481176952/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:12 np0005485008 python3.9[157804]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:31:12 np0005485008 systemd[1]: Reloading.
Oct 13 11:31:13 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:31:13 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:31:13 np0005485008 systemd[1]: Reached target edpm_libvirt.target.
Oct 13 11:31:14 np0005485008 python3.9[157994]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 13 11:31:14 np0005485008 systemd[1]: Reloading.
Oct 13 11:31:14 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:31:14 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:31:14 np0005485008 systemd[1]: Reloading.
Oct 13 11:31:14 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:31:14 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:31:15 np0005485008 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 11:31:15 np0005485008 systemd[1]: session-25.scope: Consumed 3min 28.294s CPU time.
Oct 13 11:31:15 np0005485008 systemd-logind[784]: Session 25 logged out. Waiting for processes to exit.
Oct 13 11:31:15 np0005485008 systemd-logind[784]: Removed session 25.
Oct 13 11:31:20 np0005485008 systemd-logind[784]: New session 26 of user zuul.
Oct 13 11:31:20 np0005485008 systemd[1]: Started Session 26 of User zuul.
Oct 13 11:31:21 np0005485008 python3.9[158245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:31:22 np0005485008 python3.9[158401]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:23 np0005485008 python3.9[158553]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:24 np0005485008 python3.9[158705]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:25 np0005485008 python3.9[158857]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 11:31:25 np0005485008 python3.9[159009]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:26 np0005485008 python3.9[159161]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:31:28 np0005485008 python3.9[159315]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:31:28 np0005485008 systemd[1]: Reloading.
Oct 13 11:31:28 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:31:28 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:31:29 np0005485008 python3.9[159503]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:31:29 np0005485008 network[159520]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:31:29 np0005485008 network[159521]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:31:29 np0005485008 network[159522]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:31:33 np0005485008 python3.9[159795]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:31:33 np0005485008 systemd[1]: Reloading.
Oct 13 11:31:33 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:31:33 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:31:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:31:33.929 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:31:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:31:33.931 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:31:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:31:33.931 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:31:34 np0005485008 python3.9[159981]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:31:35 np0005485008 python3.9[160133]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 13 11:31:35 np0005485008 podman[160134]: 2025-10-13 15:31:35.836820458 +0000 UTC m=+0.132768238 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:31:35 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 11:31:35 np0005485008 podman[160194]: 2025-10-13 15:31:35.985108431 +0000 UTC m=+0.047687751 container create 48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0113] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/20)
Oct 13 11:31:36 np0005485008 kernel: podman0: port 1(veth0) entered blocking state
Oct 13 11:31:36 np0005485008 kernel: podman0: port 1(veth0) entered disabled state
Oct 13 11:31:36 np0005485008 kernel: veth0: entered allmulticast mode
Oct 13 11:31:36 np0005485008 kernel: veth0: entered promiscuous mode
Oct 13 11:31:36 np0005485008 kernel: podman0: port 1(veth0) entered blocking state
Oct 13 11:31:36 np0005485008 kernel: podman0: port 1(veth0) entered forwarding state
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0304] device (veth0): carrier: link connected
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0311] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0318] device (podman0): carrier: link connected
Oct 13 11:31:36 np0005485008 systemd-udevd[160223]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:31:36 np0005485008 systemd-udevd[160226]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:31:36 np0005485008 podman[160194]: 2025-10-13 15:31:35.964756395 +0000 UTC m=+0.027335735 image pull 7acf5363984cc8f102650810da36ae6f915a365c30bf42518548c6b195c5c57c quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0680] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0687] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0695] device (podman0): Activation: starting connection 'podman0' (8598841e-03f4-4a2b-a601-fb3dbfec810f)
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0696] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0698] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0699] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.0701] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 13 11:31:36 np0005485008 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 13 11:31:36 np0005485008 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.1042] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.1045] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.1053] device (podman0): Activation: successful, device activated.
Oct 13 11:31:36 np0005485008 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 13 11:31:36 np0005485008 systemd[1]: Started libpod-conmon-48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f.scope.
Oct 13 11:31:36 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:31:36 np0005485008 podman[160194]: 2025-10-13 15:31:36.368892521 +0000 UTC m=+0.431471911 container init 48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 11:31:36 np0005485008 podman[160194]: 2025-10-13 15:31:36.3829614 +0000 UTC m=+0.445540750 container start 48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:31:36 np0005485008 iscsid_config[160353]: iqn.1994-05.com.redhat:838175d4b274#015
Oct 13 11:31:36 np0005485008 systemd[1]: libpod-48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f.scope: Deactivated successfully.
Oct 13 11:31:36 np0005485008 podman[160194]: 2025-10-13 15:31:36.38678126 +0000 UTC m=+0.449360580 container attach 48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 11:31:36 np0005485008 podman[160194]: 2025-10-13 15:31:36.391240499 +0000 UTC m=+0.453819819 container died 48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 11:31:36 np0005485008 kernel: podman0: port 1(veth0) entered disabled state
Oct 13 11:31:36 np0005485008 kernel: veth0 (unregistering): left allmulticast mode
Oct 13 11:31:36 np0005485008 kernel: veth0 (unregistering): left promiscuous mode
Oct 13 11:31:36 np0005485008 kernel: podman0: port 1(veth0) entered disabled state
Oct 13 11:31:36 np0005485008 NetworkManager[51587]: <info>  [1760369496.4499] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:31:36 np0005485008 systemd[1]: run-netns-netns\x2d541a439e\x2d3977\x2d5561\x2dbe4a\x2d04110f6bd613.mount: Deactivated successfully.
Oct 13 11:31:36 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f-userdata-shm.mount: Deactivated successfully.
Oct 13 11:31:36 np0005485008 systemd[1]: var-lib-containers-storage-overlay-e384cc013849bce9c586af4972371b28f1bfc49a0766d80cf466420ec825dfc9-merged.mount: Deactivated successfully.
Oct 13 11:31:36 np0005485008 podman[160194]: 2025-10-13 15:31:36.775104091 +0000 UTC m=+0.837683441 container remove 48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 11:31:36 np0005485008 python3.9[160133]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 13 11:31:36 np0005485008 systemd[1]: libpod-conmon-48e03a93ffe9849fe4dae5fdcf60ba9c844e6a4b157d3073df3e266b0b26e79f.scope: Deactivated successfully.
Oct 13 11:31:36 np0005485008 python3.9[160133]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 13 11:31:37 np0005485008 python3.9[160600]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:38 np0005485008 podman[160695]: 2025-10-13 15:31:38.49459687 +0000 UTC m=+0.095214956 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 11:31:38 np0005485008 python3.9[160742]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369497.2770722-224-233268690681666/.source.iscsi _original_basename=.bdbif5co follow=False checksum=60710dc48846013fc70a3eff94bd4c9d1c6aae1c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:39 np0005485008 python3.9[160895]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:40 np0005485008 python3.9[161045]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:31:41 np0005485008 python3.9[161199]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:41 np0005485008 python3.9[161351]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:42 np0005485008 python3.9[161503]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:43 np0005485008 python3.9[161581]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:43 np0005485008 python3.9[161733]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:44 np0005485008 python3.9[161811]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:45 np0005485008 python3.9[161963]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:45 np0005485008 python3.9[162115]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:46 np0005485008 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 13 11:31:46 np0005485008 python3.9[162193]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:47 np0005485008 python3.9[162345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:47 np0005485008 python3.9[162423]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:49 np0005485008 python3.9[162575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:31:49 np0005485008 systemd[1]: Reloading.
Oct 13 11:31:49 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:31:49 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:31:50 np0005485008 python3.9[162765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:50 np0005485008 python3.9[162843]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:51 np0005485008 python3.9[162995]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:52 np0005485008 python3.9[163073]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:52 np0005485008 python3.9[163225]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:31:52 np0005485008 systemd[1]: Reloading.
Oct 13 11:31:53 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:31:53 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:31:53 np0005485008 systemd[1]: Starting Create netns directory...
Oct 13 11:31:53 np0005485008 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 11:31:53 np0005485008 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 11:31:53 np0005485008 systemd[1]: Finished Create netns directory.
Oct 13 11:31:54 np0005485008 python3.9[163418]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:55 np0005485008 python3.9[163570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:55 np0005485008 python3.9[163693]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369514.6202915-533-267436306949317/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:56 np0005485008 python3.9[163845]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:31:57 np0005485008 python3.9[163997]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:31:57 np0005485008 python3.9[164120]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369516.90827-582-236759114326746/.source.json _original_basename=.pqr4frrm follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:31:59 np0005485008 python3.9[164272]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:01 np0005485008 python3.9[164699]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 13 11:32:02 np0005485008 python3.9[164851]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:32:03 np0005485008 python3.9[165003]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 11:32:05 np0005485008 python3[165181]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:32:05 np0005485008 podman[165217]: 2025-10-13 15:32:05.69938023 +0000 UTC m=+0.075864671 container create 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:32:05 np0005485008 podman[165217]: 2025-10-13 15:32:05.662752786 +0000 UTC m=+0.039237287 image pull 7acf5363984cc8f102650810da36ae6f915a365c30bf42518548c6b195c5c57c quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 11:32:05 np0005485008 python3[165181]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 13 11:32:06 np0005485008 podman[165379]: 2025-10-13 15:32:06.569654209 +0000 UTC m=+0.103284798 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:32:06 np0005485008 python3.9[165422]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:32:07 np0005485008 python3.9[165587]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:08 np0005485008 python3.9[165663]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:32:08 np0005485008 podman[165785]: 2025-10-13 15:32:08.786711761 +0000 UTC m=+0.071755952 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 13 11:32:08 np0005485008 python3.9[165832]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760369528.25646-758-152579834446990/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:09 np0005485008 python3.9[165908]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:32:09 np0005485008 systemd[1]: Reloading.
Oct 13 11:32:09 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:32:09 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:32:10 np0005485008 python3.9[166019]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:32:10 np0005485008 systemd[1]: Reloading.
Oct 13 11:32:10 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:32:10 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:32:10 np0005485008 systemd[1]: Starting iscsid container...
Oct 13 11:32:11 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:32:11 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14be0c4b8c4e45fc0eff5958839fa8e5ba27967dfaf4c2ac255611d6f0e41c7/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 13 11:32:11 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14be0c4b8c4e45fc0eff5958839fa8e5ba27967dfaf4c2ac255611d6f0e41c7/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 11:32:11 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14be0c4b8c4e45fc0eff5958839fa8e5ba27967dfaf4c2ac255611d6f0e41c7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 11:32:11 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727.
Oct 13 11:32:11 np0005485008 podman[166059]: 2025-10-13 15:32:11.081047579 +0000 UTC m=+0.147813229 container init 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 11:32:11 np0005485008 iscsid[166075]: + sudo -E kolla_set_configs
Oct 13 11:32:11 np0005485008 podman[166059]: 2025-10-13 15:32:11.117150927 +0000 UTC m=+0.183916557 container start 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 13 11:32:11 np0005485008 podman[166059]: iscsid
Oct 13 11:32:11 np0005485008 systemd[1]: Started iscsid container.
Oct 13 11:32:11 np0005485008 systemd[1]: Created slice User Slice of UID 0.
Oct 13 11:32:11 np0005485008 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 13 11:32:11 np0005485008 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 13 11:32:11 np0005485008 systemd[1]: Starting User Manager for UID 0...
Oct 13 11:32:11 np0005485008 podman[166082]: 2025-10-13 15:32:11.216926783 +0000 UTC m=+0.075654454 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 11:32:11 np0005485008 systemd[1]: 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727-44e12cdf09b51544.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 11:32:11 np0005485008 systemd[1]: 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727-44e12cdf09b51544.service: Failed with result 'exit-code'.
Oct 13 11:32:11 np0005485008 systemd[166102]: Queued start job for default target Main User Target.
Oct 13 11:32:11 np0005485008 systemd[166102]: Created slice User Application Slice.
Oct 13 11:32:11 np0005485008 systemd[166102]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 13 11:32:11 np0005485008 systemd[166102]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 11:32:11 np0005485008 systemd[166102]: Reached target Paths.
Oct 13 11:32:11 np0005485008 systemd[166102]: Reached target Timers.
Oct 13 11:32:11 np0005485008 systemd[166102]: Starting D-Bus User Message Bus Socket...
Oct 13 11:32:11 np0005485008 systemd[166102]: Starting Create User's Volatile Files and Directories...
Oct 13 11:32:11 np0005485008 systemd[166102]: Listening on D-Bus User Message Bus Socket.
Oct 13 11:32:11 np0005485008 systemd[166102]: Reached target Sockets.
Oct 13 11:32:11 np0005485008 systemd[166102]: Finished Create User's Volatile Files and Directories.
Oct 13 11:32:11 np0005485008 systemd[166102]: Reached target Basic System.
Oct 13 11:32:11 np0005485008 systemd[166102]: Reached target Main User Target.
Oct 13 11:32:11 np0005485008 systemd[166102]: Startup finished in 141ms.
Oct 13 11:32:11 np0005485008 systemd[1]: Started User Manager for UID 0.
Oct 13 11:32:11 np0005485008 systemd[1]: Started Session c3 of User root.
Oct 13 11:32:11 np0005485008 iscsid[166075]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 11:32:11 np0005485008 iscsid[166075]: INFO:__main__:Validating config file
Oct 13 11:32:11 np0005485008 iscsid[166075]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 11:32:11 np0005485008 iscsid[166075]: INFO:__main__:Writing out command to execute
Oct 13 11:32:11 np0005485008 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 13 11:32:11 np0005485008 iscsid[166075]: ++ cat /run_command
Oct 13 11:32:11 np0005485008 iscsid[166075]: + CMD='/usr/sbin/iscsid -f'
Oct 13 11:32:11 np0005485008 iscsid[166075]: + ARGS=
Oct 13 11:32:11 np0005485008 iscsid[166075]: + sudo kolla_copy_cacerts
Oct 13 11:32:11 np0005485008 systemd[1]: Started Session c4 of User root.
Oct 13 11:32:11 np0005485008 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 13 11:32:11 np0005485008 iscsid[166075]: + [[ ! -n '' ]]
Oct 13 11:32:11 np0005485008 iscsid[166075]: + . kolla_extend_start
Oct 13 11:32:11 np0005485008 iscsid[166075]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 13 11:32:11 np0005485008 iscsid[166075]: Running command: '/usr/sbin/iscsid -f'
Oct 13 11:32:11 np0005485008 iscsid[166075]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 13 11:32:11 np0005485008 iscsid[166075]: + umask 0022
Oct 13 11:32:11 np0005485008 iscsid[166075]: + exec /usr/sbin/iscsid -f
Oct 13 11:32:11 np0005485008 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 11:32:12 np0005485008 python3.9[166278]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:32:12 np0005485008 python3.9[166430]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:13 np0005485008 python3.9[166582]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:32:13 np0005485008 network[166599]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:32:13 np0005485008 network[166600]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:32:13 np0005485008 network[166601]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:32:19 np0005485008 python3.9[166875]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 11:32:20 np0005485008 python3.9[167027]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 13 11:32:21 np0005485008 python3.9[167183]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:21 np0005485008 systemd[1]: Stopping User Manager for UID 0...
Oct 13 11:32:21 np0005485008 systemd[166102]: Activating special unit Exit the Session...
Oct 13 11:32:21 np0005485008 systemd[166102]: Stopped target Main User Target.
Oct 13 11:32:21 np0005485008 systemd[166102]: Stopped target Basic System.
Oct 13 11:32:21 np0005485008 systemd[166102]: Stopped target Paths.
Oct 13 11:32:21 np0005485008 systemd[166102]: Stopped target Sockets.
Oct 13 11:32:21 np0005485008 systemd[166102]: Stopped target Timers.
Oct 13 11:32:21 np0005485008 systemd[166102]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 11:32:21 np0005485008 systemd[166102]: Closed D-Bus User Message Bus Socket.
Oct 13 11:32:21 np0005485008 systemd[166102]: Stopped Create User's Volatile Files and Directories.
Oct 13 11:32:21 np0005485008 systemd[166102]: Removed slice User Application Slice.
Oct 13 11:32:21 np0005485008 systemd[166102]: Reached target Shutdown.
Oct 13 11:32:21 np0005485008 systemd[166102]: Finished Exit the Session.
Oct 13 11:32:21 np0005485008 systemd[166102]: Reached target Exit the Session.
Oct 13 11:32:21 np0005485008 systemd[1]: user@0.service: Deactivated successfully.
Oct 13 11:32:21 np0005485008 systemd[1]: Stopped User Manager for UID 0.
Oct 13 11:32:21 np0005485008 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 13 11:32:21 np0005485008 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 13 11:32:21 np0005485008 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 13 11:32:21 np0005485008 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 13 11:32:21 np0005485008 systemd[1]: Removed slice User Slice of UID 0.
Oct 13 11:32:22 np0005485008 python3.9[167307]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369540.6351602-906-17226336901179/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:22 np0005485008 python3.9[167459]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:24 np0005485008 python3.9[167611]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:32:24 np0005485008 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 11:32:24 np0005485008 systemd[1]: Stopped Load Kernel Modules.
Oct 13 11:32:24 np0005485008 systemd[1]: Stopping Load Kernel Modules...
Oct 13 11:32:24 np0005485008 systemd[1]: Starting Load Kernel Modules...
Oct 13 11:32:24 np0005485008 systemd[1]: Finished Load Kernel Modules.
Oct 13 11:32:24 np0005485008 python3.9[167767]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:32:25 np0005485008 python3.9[167919]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:32:26 np0005485008 python3.9[168071]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:32:27 np0005485008 python3.9[168223]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:28 np0005485008 python3.9[168346]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369546.8998773-1022-272523465525391/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:29 np0005485008 python3.9[168498]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:32:29 np0005485008 python3.9[168651]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:30 np0005485008 python3.9[168803]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:31 np0005485008 python3.9[168955]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:32 np0005485008 python3.9[169107]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:33 np0005485008 python3.9[169259]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:33 np0005485008 python3.9[169411]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:32:33.930 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:32:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:32:33.932 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:32:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:32:33.933 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:32:34 np0005485008 python3.9[169563]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:35 np0005485008 python3.9[169715]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:32:36 np0005485008 python3.9[169869]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:36 np0005485008 podman[169917]: 2025-10-13 15:32:36.837371133 +0000 UTC m=+0.138244876 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Oct 13 11:32:37 np0005485008 python3.9[170047]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:32:38 np0005485008 python3.9[170199]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:38 np0005485008 python3.9[170277]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:32:39 np0005485008 podman[170401]: 2025-10-13 15:32:39.048233797 +0000 UTC m=+0.066639511 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:32:39 np0005485008 python3.9[170448]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:39 np0005485008 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 13 11:32:39 np0005485008 python3.9[170526]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:32:40 np0005485008 python3.9[170679]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:40 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 11:32:41 np0005485008 python3.9[170832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:41 np0005485008 podman[170882]: 2025-10-13 15:32:41.765388299 +0000 UTC m=+0.071696620 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 11:32:41 np0005485008 python3.9[170930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:42 np0005485008 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 13 11:32:42 np0005485008 python3.9[171083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:43 np0005485008 python3.9[171161]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:43 np0005485008 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 13 11:32:44 np0005485008 python3.9[171314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:32:44 np0005485008 systemd[1]: Reloading.
Oct 13 11:32:44 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:32:44 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:32:45 np0005485008 python3.9[171503]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:45 np0005485008 python3.9[171581]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:46 np0005485008 python3.9[171733]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:47 np0005485008 python3.9[171811]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:48 np0005485008 python3.9[171963]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:32:48 np0005485008 systemd[1]: Reloading.
Oct 13 11:32:48 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:32:48 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:32:48 np0005485008 systemd[1]: Starting Create netns directory...
Oct 13 11:32:48 np0005485008 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 13 11:32:48 np0005485008 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 13 11:32:48 np0005485008 systemd[1]: Finished Create netns directory.
Oct 13 11:32:49 np0005485008 python3.9[172157]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:32:50 np0005485008 python3.9[172309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:50 np0005485008 python3.9[172432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369569.6800802-1436-222036247785435/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:32:52 np0005485008 python3.9[172584]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:32:52 np0005485008 python3.9[172736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:32:53 np0005485008 python3.9[172859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369572.364809-1486-197839035606117/.source.json _original_basename=.9ed062qi follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:54 np0005485008 python3.9[173011]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:32:56 np0005485008 python3.9[173438]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 13 11:32:57 np0005485008 python3.9[173590]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:32:58 np0005485008 python3.9[173742]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 13 11:32:59 np0005485008 python3[173920]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:33:00 np0005485008 podman[173955]: 2025-10-13 15:33:00.032105133 +0000 UTC m=+0.069361126 container create 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:33:00 np0005485008 podman[173955]: 2025-10-13 15:32:59.999071933 +0000 UTC m=+0.036327926 image pull 7042d0e4c063a84abce3ee29396c85a102ad504e82c1a0963682094dbdd1cf87 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 13 11:33:00 np0005485008 python3[173920]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 13 11:33:01 np0005485008 python3.9[174146]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:33:01 np0005485008 python3.9[174300]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:02 np0005485008 python3.9[174376]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:33:03 np0005485008 python3.9[174527]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760369582.3836508-1662-203729013245020/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:03 np0005485008 python3.9[174603]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:33:03 np0005485008 systemd[1]: Reloading.
Oct 13 11:33:03 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:33:03 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:33:04 np0005485008 python3.9[174714]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:04 np0005485008 systemd[1]: Reloading.
Oct 13 11:33:04 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:33:04 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:33:05 np0005485008 systemd[1]: Starting multipathd container...
Oct 13 11:33:05 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:33:05 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317b8dbde4f37a457e3291464b85f955052071370f6fc57e42b24fd6d7163927/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 11:33:05 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317b8dbde4f37a457e3291464b85f955052071370f6fc57e42b24fd6d7163927/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 11:33:05 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.
Oct 13 11:33:05 np0005485008 podman[174753]: 2025-10-13 15:33:05.191560629 +0000 UTC m=+0.123053917 container init 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:33:05 np0005485008 multipathd[174769]: + sudo -E kolla_set_configs
Oct 13 11:33:05 np0005485008 podman[174753]: 2025-10-13 15:33:05.222150233 +0000 UTC m=+0.153643511 container start 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:33:05 np0005485008 podman[174753]: multipathd
Oct 13 11:33:05 np0005485008 systemd[1]: Started multipathd container.
Oct 13 11:33:05 np0005485008 multipathd[174769]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 11:33:05 np0005485008 multipathd[174769]: INFO:__main__:Validating config file
Oct 13 11:33:05 np0005485008 multipathd[174769]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 11:33:05 np0005485008 multipathd[174769]: INFO:__main__:Writing out command to execute
Oct 13 11:33:05 np0005485008 multipathd[174769]: ++ cat /run_command
Oct 13 11:33:05 np0005485008 multipathd[174769]: + CMD='/usr/sbin/multipathd -d'
Oct 13 11:33:05 np0005485008 multipathd[174769]: + ARGS=
Oct 13 11:33:05 np0005485008 multipathd[174769]: + sudo kolla_copy_cacerts
Oct 13 11:33:05 np0005485008 multipathd[174769]: + [[ ! -n '' ]]
Oct 13 11:33:05 np0005485008 multipathd[174769]: + . kolla_extend_start
Oct 13 11:33:05 np0005485008 multipathd[174769]: Running command: '/usr/sbin/multipathd -d'
Oct 13 11:33:05 np0005485008 multipathd[174769]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 13 11:33:05 np0005485008 multipathd[174769]: + umask 0022
Oct 13 11:33:05 np0005485008 multipathd[174769]: + exec /usr/sbin/multipathd -d
Oct 13 11:33:05 np0005485008 podman[174776]: 2025-10-13 15:33:05.32999809 +0000 UTC m=+0.093578819 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:33:05 np0005485008 multipathd[174769]: 3123.059332 | --------start up--------
Oct 13 11:33:05 np0005485008 multipathd[174769]: 3123.059351 | read /etc/multipath.conf
Oct 13 11:33:05 np0005485008 systemd[1]: 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e-73c63977b3ee0ec1.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 11:33:05 np0005485008 systemd[1]: 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e-73c63977b3ee0ec1.service: Failed with result 'exit-code'.
Oct 13 11:33:05 np0005485008 multipathd[174769]: 3123.064767 | path checkers start up
Oct 13 11:33:06 np0005485008 python3.9[174958]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:33:06 np0005485008 python3.9[175112]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:33:07 np0005485008 podman[175249]: 2025-10-13 15:33:07.757140416 +0000 UTC m=+0.133930670 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 11:33:08 np0005485008 python3.9[175298]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:33:08 np0005485008 systemd[1]: Stopping multipathd container...
Oct 13 11:33:08 np0005485008 multipathd[174769]: 3125.845709 | exit (signal)
Oct 13 11:33:08 np0005485008 multipathd[174769]: 3125.845828 | --------shut down-------
Oct 13 11:33:08 np0005485008 systemd[1]: libpod-0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.scope: Deactivated successfully.
Oct 13 11:33:08 np0005485008 podman[175308]: 2025-10-13 15:33:08.151930372 +0000 UTC m=+0.075801928 container died 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 13 11:33:08 np0005485008 systemd[1]: 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e-73c63977b3ee0ec1.timer: Deactivated successfully.
Oct 13 11:33:08 np0005485008 systemd[1]: Stopped /usr/bin/podman healthcheck run 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.
Oct 13 11:33:08 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e-userdata-shm.mount: Deactivated successfully.
Oct 13 11:33:08 np0005485008 systemd[1]: var-lib-containers-storage-overlay-317b8dbde4f37a457e3291464b85f955052071370f6fc57e42b24fd6d7163927-merged.mount: Deactivated successfully.
Oct 13 11:33:08 np0005485008 podman[175308]: 2025-10-13 15:33:08.18706538 +0000 UTC m=+0.110936936 container cleanup 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:33:08 np0005485008 podman[175308]: multipathd
Oct 13 11:33:08 np0005485008 podman[175335]: multipathd
Oct 13 11:33:08 np0005485008 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 13 11:33:08 np0005485008 systemd[1]: Stopped multipathd container.
Oct 13 11:33:08 np0005485008 systemd[1]: Starting multipathd container...
Oct 13 11:33:08 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:33:08 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317b8dbde4f37a457e3291464b85f955052071370f6fc57e42b24fd6d7163927/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 11:33:08 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/317b8dbde4f37a457e3291464b85f955052071370f6fc57e42b24fd6d7163927/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 11:33:08 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.
Oct 13 11:33:08 np0005485008 podman[175348]: 2025-10-13 15:33:08.479443919 +0000 UTC m=+0.179856056 container init 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3)
Oct 13 11:33:08 np0005485008 multipathd[175363]: + sudo -E kolla_set_configs
Oct 13 11:33:08 np0005485008 podman[175348]: 2025-10-13 15:33:08.516999423 +0000 UTC m=+0.217411530 container start 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:33:08 np0005485008 podman[175348]: multipathd
Oct 13 11:33:08 np0005485008 systemd[1]: Started multipathd container.
Oct 13 11:33:08 np0005485008 multipathd[175363]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 11:33:08 np0005485008 multipathd[175363]: INFO:__main__:Validating config file
Oct 13 11:33:08 np0005485008 multipathd[175363]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 11:33:08 np0005485008 multipathd[175363]: INFO:__main__:Writing out command to execute
Oct 13 11:33:08 np0005485008 multipathd[175363]: ++ cat /run_command
Oct 13 11:33:08 np0005485008 multipathd[175363]: + CMD='/usr/sbin/multipathd -d'
Oct 13 11:33:08 np0005485008 multipathd[175363]: + ARGS=
Oct 13 11:33:08 np0005485008 multipathd[175363]: + sudo kolla_copy_cacerts
Oct 13 11:33:08 np0005485008 podman[175370]: 2025-10-13 15:33:08.613115771 +0000 UTC m=+0.078739282 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:33:08 np0005485008 systemd[1]: 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e-6eb63e6a28db6d8f.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 11:33:08 np0005485008 systemd[1]: 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e-6eb63e6a28db6d8f.service: Failed with result 'exit-code'.
Oct 13 11:33:08 np0005485008 multipathd[175363]: + [[ ! -n '' ]]
Oct 13 11:33:08 np0005485008 multipathd[175363]: + . kolla_extend_start
Oct 13 11:33:08 np0005485008 multipathd[175363]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 13 11:33:08 np0005485008 multipathd[175363]: Running command: '/usr/sbin/multipathd -d'
Oct 13 11:33:08 np0005485008 multipathd[175363]: + umask 0022
Oct 13 11:33:08 np0005485008 multipathd[175363]: + exec /usr/sbin/multipathd -d
Oct 13 11:33:08 np0005485008 multipathd[175363]: 3126.370758 | --------start up--------
Oct 13 11:33:08 np0005485008 multipathd[175363]: 3126.370779 | read /etc/multipath.conf
Oct 13 11:33:08 np0005485008 multipathd[175363]: 3126.377188 | path checkers start up
Oct 13 11:33:09 np0005485008 podman[175527]: 2025-10-13 15:33:09.164580493 +0000 UTC m=+0.064280246 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:33:09 np0005485008 python3.9[175572]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:10 np0005485008 python3.9[175724]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 13 11:33:11 np0005485008 python3.9[175876]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 13 11:33:11 np0005485008 kernel: Key type psk registered
Oct 13 11:33:11 np0005485008 podman[176009]: 2025-10-13 15:33:11.986892188 +0000 UTC m=+0.070342037 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:33:12 np0005485008 python3.9[176057]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:33:12 np0005485008 python3.9[176180]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369591.621407-1822-196536803991072/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:13 np0005485008 python3.9[176332]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:14 np0005485008 python3.9[176484]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:33:14 np0005485008 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 11:33:14 np0005485008 systemd[1]: Stopped Load Kernel Modules.
Oct 13 11:33:14 np0005485008 systemd[1]: Stopping Load Kernel Modules...
Oct 13 11:33:14 np0005485008 systemd[1]: Starting Load Kernel Modules...
Oct 13 11:33:14 np0005485008 systemd[1]: Finished Load Kernel Modules.
Oct 13 11:33:15 np0005485008 python3.9[176640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 13 11:33:16 np0005485008 python3.9[176724]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 13 11:33:23 np0005485008 systemd[1]: Reloading.
Oct 13 11:33:23 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:33:23 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:33:23 np0005485008 systemd[1]: Reloading.
Oct 13 11:33:23 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:33:23 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:33:23 np0005485008 systemd-logind[784]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 13 11:33:23 np0005485008 systemd-logind[784]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 13 11:33:23 np0005485008 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 13 11:33:23 np0005485008 systemd[1]: Starting man-db-cache-update.service...
Oct 13 11:33:23 np0005485008 systemd[1]: Reloading.
Oct 13 11:33:24 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:33:24 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:33:24 np0005485008 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 13 11:33:25 np0005485008 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 13 11:33:25 np0005485008 systemd[1]: Finished man-db-cache-update.service.
Oct 13 11:33:25 np0005485008 systemd[1]: man-db-cache-update.service: Consumed 1.536s CPU time.
Oct 13 11:33:25 np0005485008 systemd[1]: run-rd8d545e123104d12808cb0fa1de17be9.service: Deactivated successfully.
Oct 13 11:33:25 np0005485008 python3.9[178176]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:26 np0005485008 python3.9[178326]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:33:27 np0005485008 python3.9[178482]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:29 np0005485008 python3.9[178634]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:33:29 np0005485008 systemd[1]: Reloading.
Oct 13 11:33:29 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:33:29 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:33:30 np0005485008 python3.9[178818]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:33:30 np0005485008 network[178835]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:33:30 np0005485008 network[178836]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:33:30 np0005485008 network[178837]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:33:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:33:33.932 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:33:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:33:33.934 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:33:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:33:33.934 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:33:34 np0005485008 python3.9[179114]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:35 np0005485008 python3.9[179267]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:36 np0005485008 python3.9[179420]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:37 np0005485008 python3.9[179573]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:37 np0005485008 podman[179726]: 2025-10-13 15:33:37.952309334 +0000 UTC m=+0.123144105 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:33:38 np0005485008 python3.9[179727]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:38 np0005485008 podman[179877]: 2025-10-13 15:33:38.708828977 +0000 UTC m=+0.059492785 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 13 11:33:39 np0005485008 python3.9[179925]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:39 np0005485008 podman[180051]: 2025-10-13 15:33:39.510963043 +0000 UTC m=+0.058066078 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:33:39 np0005485008 python3.9[180098]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:40 np0005485008 python3.9[180252]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:33:41 np0005485008 python3.9[180405]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:42 np0005485008 podman[180529]: 2025-10-13 15:33:42.138960475 +0000 UTC m=+0.057687086 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 11:33:42 np0005485008 python3.9[180571]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:43 np0005485008 python3.9[180727]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:43 np0005485008 python3.9[180879]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:44 np0005485008 python3.9[181031]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:45 np0005485008 python3.9[181183]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:45 np0005485008 python3.9[181335]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:46 np0005485008 python3.9[181487]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:47 np0005485008 python3.9[181639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:48 np0005485008 python3.9[181791]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:49 np0005485008 python3.9[181943]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:49 np0005485008 python3.9[182095]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:50 np0005485008 python3.9[182247]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:51 np0005485008 python3.9[182399]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:51 np0005485008 python3.9[182551]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:52 np0005485008 python3.9[182703]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:33:53 np0005485008 python3.9[182855]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:33:54 np0005485008 python3.9[183007]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 11:33:55 np0005485008 python3.9[183159]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:33:55 np0005485008 systemd[1]: Reloading.
Oct 13 11:33:55 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:33:55 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:33:56 np0005485008 python3.9[183345]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:33:57 np0005485008 python3.9[183498]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:33:57 np0005485008 python3.9[183651]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:33:58 np0005485008 python3.9[183804]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:33:59 np0005485008 python3.9[183957]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:34:00 np0005485008 python3.9[184110]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:34:00 np0005485008 python3.9[184263]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:34:01 np0005485008 python3.9[184416]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:34:04 np0005485008 python3.9[184569]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:04 np0005485008 python3.9[184721]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:05 np0005485008 python3.9[184873]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:06 np0005485008 python3.9[185025]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:07 np0005485008 python3.9[185177]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:07 np0005485008 python3.9[185329]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:08 np0005485008 podman[185453]: 2025-10-13 15:34:08.464673594 +0000 UTC m=+0.117865193 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:34:08 np0005485008 python3.9[185494]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:09 np0005485008 podman[185632]: 2025-10-13 15:34:09.142323603 +0000 UTC m=+0.064053323 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:34:09 np0005485008 python3.9[185680]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:09 np0005485008 podman[185781]: 2025-10-13 15:34:09.774823305 +0000 UTC m=+0.075835656 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:34:10 np0005485008 python3.9[185851]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:10 np0005485008 python3.9[186004]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:11 np0005485008 python3.9[186156]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:12 np0005485008 python3.9[186308]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:12 np0005485008 podman[186309]: 2025-10-13 15:34:12.29340973 +0000 UTC m=+0.093487340 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 11:34:17 np0005485008 python3.9[186482]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 13 11:34:18 np0005485008 python3.9[186635]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 11:34:19 np0005485008 python3.9[186793]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 11:34:20 np0005485008 systemd-logind[784]: New session 28 of user zuul.
Oct 13 11:34:20 np0005485008 systemd[1]: Started Session 28 of User zuul.
Oct 13 11:34:20 np0005485008 systemd[1]: session-28.scope: Deactivated successfully.
Oct 13 11:34:20 np0005485008 systemd-logind[784]: Session 28 logged out. Waiting for processes to exit.
Oct 13 11:34:20 np0005485008 systemd-logind[784]: Removed session 28.
Oct 13 11:34:21 np0005485008 python3.9[186979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:21 np0005485008 python3.9[187100]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369660.8150318-2939-64686585388666/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:22 np0005485008 python3.9[187250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:22 np0005485008 python3.9[187326]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:23 np0005485008 python3.9[187476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:24 np0005485008 python3.9[187597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369663.1039813-2939-248227290038428/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:24 np0005485008 python3.9[187747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:25 np0005485008 python3.9[187868]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369664.3090255-2939-146717221792700/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:26 np0005485008 python3.9[188018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:26 np0005485008 python3.9[188139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369665.593732-2939-258097743041708/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:27 np0005485008 python3.9[188291]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:34:28 np0005485008 python3.9[188443]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:34:29 np0005485008 python3.9[188595]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:34:29 np0005485008 python3.9[188747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:30 np0005485008 python3.9[188870]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760369669.4802969-3125-116387907336664/.source _original_basename=.agm6xvew follow=False checksum=89f4f5d769ecd2f55a3a0192b07e1e7dece48280 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 13 11:34:31 np0005485008 python3.9[189022]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:34:32 np0005485008 python3.9[189174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:32 np0005485008 python3.9[189295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369671.7677183-3177-61659822939020/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:33 np0005485008 python3.9[189445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:34:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:34:33.933 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:34:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:34:33.934 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:34:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:34:33.934 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:34:34 np0005485008 python3.9[189566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369673.1446943-3207-108705479356147/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:34:35 np0005485008 python3.9[189718]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 13 11:34:36 np0005485008 python3.9[189870]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:34:37 np0005485008 python3[190022]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:34:37 np0005485008 podman[190059]: 2025-10-13 15:34:37.321588345 +0000 UTC m=+0.053745998 container create 1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Oct 13 11:34:37 np0005485008 podman[190059]: 2025-10-13 15:34:37.293815973 +0000 UTC m=+0.025973646 image pull 97abb4e5d6eb812c6abde306e15dbdde9dbba5ef5cd42ad11b83abc055914569 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 11:34:37 np0005485008 python3[190022]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 13 11:34:38 np0005485008 python3.9[190250]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:34:38 np0005485008 podman[190277]: 2025-10-13 15:34:38.839914785 +0000 UTC m=+0.135271709 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 11:34:39 np0005485008 podman[190403]: 2025-10-13 15:34:39.287447054 +0000 UTC m=+0.071918859 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:34:39 np0005485008 python3.9[190450]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 13 11:34:40 np0005485008 podman[190575]: 2025-10-13 15:34:40.135182913 +0000 UTC m=+0.055773124 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 11:34:40 np0005485008 python3.9[190622]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:34:41 np0005485008 python3[190774]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:34:41 np0005485008 podman[190811]: 2025-10-13 15:34:41.621044037 +0000 UTC m=+0.056196994 container create 907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:34:41 np0005485008 podman[190811]: 2025-10-13 15:34:41.590368003 +0000 UTC m=+0.025521000 image pull 97abb4e5d6eb812c6abde306e15dbdde9dbba5ef5cd42ad11b83abc055914569 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 13 11:34:41 np0005485008 python3[190774]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 13 11:34:42 np0005485008 podman[190973]: 2025-10-13 15:34:42.433301112 +0000 UTC m=+0.077275764 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:34:42 np0005485008 python3.9[191021]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:34:43 np0005485008 python3.9[191175]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:34:44 np0005485008 python3.9[191326]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760369683.5342152-3391-109561983115586/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:34:44 np0005485008 python3.9[191402]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:34:44 np0005485008 systemd[1]: Reloading.
Oct 13 11:34:45 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:34:45 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:34:45 np0005485008 python3.9[191515]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:34:45 np0005485008 systemd[1]: Reloading.
Oct 13 11:34:46 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:34:46 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:34:46 np0005485008 systemd[1]: Starting nova_compute container...
Oct 13 11:34:46 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:34:46 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:46 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:46 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:46 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:46 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:46 np0005485008 podman[191555]: 2025-10-13 15:34:46.36302897 +0000 UTC m=+0.119239489 container init 907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=nova_compute)
Oct 13 11:34:46 np0005485008 podman[191555]: 2025-10-13 15:34:46.374352119 +0000 UTC m=+0.130562618 container start 907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=nova_compute)
Oct 13 11:34:46 np0005485008 podman[191555]: nova_compute
Oct 13 11:34:46 np0005485008 systemd[1]: Started nova_compute container.
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + sudo -E kolla_set_configs
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Validating config file
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying service configuration files
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Deleting /etc/ceph
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Creating directory /etc/ceph
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /etc/ceph
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Writing out command to execute
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:46 np0005485008 nova_compute[191570]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 11:34:46 np0005485008 nova_compute[191570]: ++ cat /run_command
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + CMD=nova-compute
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + ARGS=
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + sudo kolla_copy_cacerts
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + [[ ! -n '' ]]
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + . kolla_extend_start
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + echo 'Running command: '\''nova-compute'\'''
Oct 13 11:34:46 np0005485008 nova_compute[191570]: Running command: 'nova-compute'
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + umask 0022
Oct 13 11:34:46 np0005485008 nova_compute[191570]: + exec nova-compute
Oct 13 11:34:47 np0005485008 python3.9[191732]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:34:48 np0005485008 nova_compute[191570]: 2025-10-13 15:34:48.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 13 11:34:48 np0005485008 nova_compute[191570]: 2025-10-13 15:34:48.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 13 11:34:48 np0005485008 nova_compute[191570]: 2025-10-13 15:34:48.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 13 11:34:48 np0005485008 nova_compute[191570]: 2025-10-13 15:34:48.482 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct 13 11:34:48 np0005485008 nova_compute[191570]: 2025-10-13 15:34:48.610 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:34:48 np0005485008 nova_compute[191570]: 2025-10-13 15:34:48.641 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:34:48 np0005485008 python3.9[191884]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.305 2 INFO nova.virt.driver [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.450 2 INFO nova.compute.provider_config [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct 13 11:34:49 np0005485008 python3.9[192036]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.509 2 DEBUG oslo_concurrency.lockutils [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.510 2 DEBUG oslo_concurrency.lockutils [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.510 2 DEBUG oslo_concurrency.lockutils [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.511 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.511 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.511 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.511 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.512 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.512 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.512 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.512 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.513 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.513 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.513 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.513 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.514 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.514 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.514 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.514 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.515 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.515 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.515 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.515 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.516 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.516 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.516 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.516 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.517 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.517 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.517 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.517 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.518 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.518 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.518 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.519 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.519 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.519 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.519 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.520 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.520 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.520 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.520 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.521 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.521 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.521 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.521 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.522 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.522 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.522 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.522 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.523 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.523 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.523 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.523 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.524 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.524 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.524 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.524 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.525 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.525 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.525 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.525 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.526 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.526 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.526 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.526 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.527 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.527 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.527 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.527 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.527 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.528 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.528 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.528 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.528 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.529 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.529 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.529 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.529 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.530 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.530 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.530 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.530 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.531 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.531 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.531 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.531 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.532 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.532 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.532 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.532 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.533 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.533 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.533 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.533 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.534 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.534 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.534 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.534 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.535 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.535 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.535 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.535 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.535 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.536 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.536 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.536 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.536 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.537 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.537 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.537 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.537 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.538 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.538 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.538 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.538 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.539 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.539 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.539 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.539 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.540 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.540 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.540 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.540 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.541 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.541 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.541 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.541 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.542 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.542 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.542 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.542 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.543 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.543 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.543 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.543 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.544 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.544 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.544 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.544 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.545 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.545 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.545 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.545 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.545 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.546 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.546 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.546 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.546 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.547 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.547 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.547 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.547 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.548 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.548 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.548 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.548 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.549 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.549 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.549 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.550 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.550 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.550 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.550 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.551 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.551 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.551 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.551 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.551 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.552 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.552 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.552 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.552 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.553 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.553 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.553 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.553 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.554 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.554 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.554 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.554 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.555 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.555 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.555 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.556 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.556 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.556 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.556 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.557 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.557 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.557 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.557 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.558 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.558 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.558 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.558 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.558 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.559 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.559 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.559 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.559 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.560 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.560 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.560 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.560 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.560 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.560 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.561 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.561 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.561 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.561 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.561 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.562 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.562 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.562 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.562 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.562 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.562 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.563 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.563 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.563 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.563 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.563 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.564 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.564 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.564 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.564 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.564 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.565 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.565 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.565 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.565 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.565 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.565 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.566 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.566 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.566 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.566 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.566 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.567 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.567 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.567 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.567 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.567 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.568 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.568 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.568 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.568 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.568 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.569 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.569 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.569 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.569 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.569 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.569 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.570 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.570 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.570 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.570 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.570 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.571 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.571 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.571 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.571 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.571 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.572 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.572 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.572 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.572 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.572 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.573 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.573 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.573 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.573 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.573 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.573 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.574 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.574 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.574 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.574 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.574 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.575 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.575 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.575 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.575 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.575 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.576 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.576 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.576 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.576 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.576 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.577 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.577 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.577 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.577 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.577 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.577 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.578 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.578 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.578 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.578 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.578 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.579 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.579 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.579 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.579 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.579 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.580 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.580 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.580 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.580 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.580 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.581 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.581 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.581 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.581 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.581 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.581 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.582 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.582 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.582 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.582 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.582 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.583 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.583 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.583 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.583 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.583 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.583 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.584 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.584 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.584 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.584 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.584 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.585 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.585 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.585 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.585 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.585 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.585 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.586 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.586 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.586 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.586 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.586 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.587 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.587 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.587 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.587 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.587 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.587 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.588 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.588 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.588 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.588 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.588 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.589 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.589 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.589 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.589 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.590 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.590 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.590 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.590 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.590 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.591 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.591 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.591 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.591 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.591 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.591 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.592 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.592 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.592 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.592 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.592 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.593 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.593 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.593 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.593 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.593 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.593 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.594 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.594 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.594 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.594 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.594 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.595 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.595 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.595 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.595 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.595 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.596 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.596 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.596 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.596 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.596 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.596 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.597 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.597 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.597 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.597 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.597 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.598 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.598 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.598 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.598 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.598 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.599 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.599 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.599 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.600 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.600 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.600 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.600 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.600 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.601 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.601 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.601 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.602 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.602 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.602 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.602 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.603 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.603 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.603 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.603 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.604 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.604 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.604 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.604 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.605 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.605 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.605 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.605 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.606 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.606 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.606 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.607 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.607 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.607 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.607 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.608 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.608 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.608 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.609 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.609 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.609 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.609 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.610 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.610 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.610 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.611 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.611 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.611 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.611 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.612 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.612 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.612 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.613 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.613 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.613 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.613 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.614 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.614 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.614 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.615 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.615 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.615 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.616 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.616 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.616 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.617 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.617 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.617 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.617 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.618 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.618 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.618 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.619 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.619 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.619 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.619 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.620 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.620 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.620 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.620 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.620 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.621 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.621 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.621 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.621 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.621 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.621 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.622 2 WARNING oslo_config.cfg [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 13 11:34:49 np0005485008 nova_compute[191570]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 13 11:34:49 np0005485008 nova_compute[191570]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 13 11:34:49 np0005485008 nova_compute[191570]: and ``live_migration_inbound_addr`` respectively.
Oct 13 11:34:49 np0005485008 nova_compute[191570]: ).  Its value may be silently ignored in the future.#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.622 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.622 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.622 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.623 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.623 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.623 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.623 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.623 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.624 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.624 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.624 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.624 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.625 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.625 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.625 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.625 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.625 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.626 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.626 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.626 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.626 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.626 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.626 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.627 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.627 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.627 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.627 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.627 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.628 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.628 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.628 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.628 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.628 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.629 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.629 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.629 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.629 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.629 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.630 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.630 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.630 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.630 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.630 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.631 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.631 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.631 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.631 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.631 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.632 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.632 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.632 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.632 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.632 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.633 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.633 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.633 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.633 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.633 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.633 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.634 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.634 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.634 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.634 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.634 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.635 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.635 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.635 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.635 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.635 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.635 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.636 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.636 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.636 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.636 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.636 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.636 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.637 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.637 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.637 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.637 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.637 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.638 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.638 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.638 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.638 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.638 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.639 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.639 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.639 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.639 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.639 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.639 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.640 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.640 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.640 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.640 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.640 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.641 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.641 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.641 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.641 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.641 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.641 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.642 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.642 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.642 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.642 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.642 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.642 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.643 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.643 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.643 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.643 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.643 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.644 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.644 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.644 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.644 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.644 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.644 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.645 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.645 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.645 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.645 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.645 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.646 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.646 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.646 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.646 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.646 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.647 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.647 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.647 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.647 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.647 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.647 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.648 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.648 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.648 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.648 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.649 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.649 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.649 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.649 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.649 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.649 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.650 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.650 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.650 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.650 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.650 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.651 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.651 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.651 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.651 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.651 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.651 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.652 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.652 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.652 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.652 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.652 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.653 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.653 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.653 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.653 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.653 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.654 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.654 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.654 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.654 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.654 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.654 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.655 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.655 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.655 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.655 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.655 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.656 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.656 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.656 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.656 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.656 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.657 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.657 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.657 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.657 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.657 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.658 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.658 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.658 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.658 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.658 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.658 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.659 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.659 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.659 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.659 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.660 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.660 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.660 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.660 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.660 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.660 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.661 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.661 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.661 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.661 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.661 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.662 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.662 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.662 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.662 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.662 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.662 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.663 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.663 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.663 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.663 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.663 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.664 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.664 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.664 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.664 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.664 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.664 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.665 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.665 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.665 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.665 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.665 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.666 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.666 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.666 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.666 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.666 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.667 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.667 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.667 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.667 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.667 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.667 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.668 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.668 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.668 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.668 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.668 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.669 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.669 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.669 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.669 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.670 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.670 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.670 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.670 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.671 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.671 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.671 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.671 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.671 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.672 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.672 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.672 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.672 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.672 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.672 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.673 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.673 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.673 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.673 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.673 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.674 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.674 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.674 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.674 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.674 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.674 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.675 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.675 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.675 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.675 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.675 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.676 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.676 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.676 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.676 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.676 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.677 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.677 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.677 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.677 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.677 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.678 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.678 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.678 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.678 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.678 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.679 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.679 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.679 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.679 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.679 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.680 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.680 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.680 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.680 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.680 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.681 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.681 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.681 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.681 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.681 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.681 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.682 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.682 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.682 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.682 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.682 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.683 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.683 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.683 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.683 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.683 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.683 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.684 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.684 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.684 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.684 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.684 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.685 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.685 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.685 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.685 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.685 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.685 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.686 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.686 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.686 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.686 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.686 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.687 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.687 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.687 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.687 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.687 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.688 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.688 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.688 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.688 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.688 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.688 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.689 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.689 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.689 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.689 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.689 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.690 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.690 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.690 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.690 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.690 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.690 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.691 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.691 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.691 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.691 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.691 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.692 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.692 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.692 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.692 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.692 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.692 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.693 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.693 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.693 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.693 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.693 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.694 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.694 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.694 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.694 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.694 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.694 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.695 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.695 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.695 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.695 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.695 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.696 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.696 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.696 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.696 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.696 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.697 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.697 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.697 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.697 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.697 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.698 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.698 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.698 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.698 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.698 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.698 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.699 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.699 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.699 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.699 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.699 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.700 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.700 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.700 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.700 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.700 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.700 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.701 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.701 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.701 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.701 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.701 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.701 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.702 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.702 2 DEBUG oslo_service.service [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.703 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.725 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.726 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.726 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.727 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 13 11:34:49 np0005485008 systemd[1]: Starting libvirt QEMU daemon...
Oct 13 11:34:49 np0005485008 systemd[1]: Started libvirt QEMU daemon.
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.802 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3d38dc9790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.804 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3d38dc9790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.805 2 INFO nova.virt.libvirt.driver [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.828 2 WARNING nova.virt.libvirt.driver [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct 13 11:34:49 np0005485008 nova_compute[191570]: 2025-10-13 15:34:49.829 2 DEBUG nova.virt.libvirt.volume.mount [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 13 11:34:50 np0005485008 python3.9[192240]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 13 11:34:50 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.726 2 INFO nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Libvirt host capabilities <capabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <host>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <uuid>48631c7f-32de-410b-ace1-0785fa2d7327</uuid>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <arch>x86_64</arch>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model>EPYC-Rome-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <vendor>AMD</vendor>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <microcode version='16777317'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <signature family='23' model='49' stepping='0'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='x2apic'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='tsc-deadline'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='osxsave'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='hypervisor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='tsc_adjust'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='spec-ctrl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='stibp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='arch-capabilities'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='cmp_legacy'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='topoext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='virt-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='lbrv'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='tsc-scale'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='vmcb-clean'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='pause-filter'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='pfthreshold'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='svme-addr-chk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='rdctl-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='skip-l1dfl-vmentry'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='mds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature name='pschange-mc-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <pages unit='KiB' size='4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <pages unit='KiB' size='2048'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <pages unit='KiB' size='1048576'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <power_management>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <suspend_mem/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <suspend_disk/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <suspend_hybrid/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </power_management>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <iommu support='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <migration_features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <live/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <uri_transports>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <uri_transport>tcp</uri_transport>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <uri_transport>rdma</uri_transport>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </uri_transports>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </migration_features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <topology>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <cells num='1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <cell id='0'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          <memory unit='KiB'>7864352</memory>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          <pages unit='KiB' size='4'>1966088</pages>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          <pages unit='KiB' size='2048'>0</pages>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          <distances>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <sibling id='0' value='10'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          </distances>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          <cpus num='8'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:          </cpus>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        </cell>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </cells>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </topology>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <cache>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </cache>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <secmodel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model>selinux</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <doi>0</doi>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </secmodel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <secmodel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model>dac</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <doi>0</doi>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </secmodel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </host>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <guest>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <os_type>hvm</os_type>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <arch name='i686'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <wordsize>32</wordsize>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <domain type='qemu'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <domain type='kvm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </arch>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <pae/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <nonpae/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <acpi default='on' toggle='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <apic default='on' toggle='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <cpuselection/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <deviceboot/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <disksnapshot default='on' toggle='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <externalSnapshot/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </guest>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <guest>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <os_type>hvm</os_type>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <arch name='x86_64'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <wordsize>64</wordsize>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <domain type='qemu'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <domain type='kvm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </arch>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <acpi default='on' toggle='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <apic default='on' toggle='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <cpuselection/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <deviceboot/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <disksnapshot default='on' toggle='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <externalSnapshot/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </guest>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 
Oct 13 11:34:50 np0005485008 nova_compute[191570]: </capabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.735 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.764 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 13 11:34:50 np0005485008 nova_compute[191570]: <domainCapabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <domain>kvm</domain>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <arch>i686</arch>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <vcpu max='240'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <iothreads supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <os supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <enum name='firmware'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <loader supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>rom</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pflash</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='readonly'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>yes</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='secure'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </loader>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </os>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='maximumMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <vendor>AMD</vendor>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='succor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='custom' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-128'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-256'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-512'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SierraForest'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='athlon'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='athlon-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='core2duo'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='core2duo-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='coreduo'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='coreduo-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='n270'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='n270-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='phenom'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='phenom-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <memoryBacking supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <enum name='sourceType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>file</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>anonymous</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>memfd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </memoryBacking>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <devices>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <disk supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='diskDevice'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>disk</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>cdrom</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>floppy</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>lun</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ide</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>fdc</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>sata</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </disk>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <graphics supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vnc</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>egl-headless</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>dbus</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </graphics>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <video supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='modelType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vga</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>cirrus</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>none</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>bochs</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ramfb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </video>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <hostdev supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='mode'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>subsystem</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='startupPolicy'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>mandatory</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>requisite</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>optional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='subsysType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pci</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='capsType'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='pciBackend'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </hostdev>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <rng supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>random</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>egd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </rng>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <filesystem supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='driverType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>path</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>handle</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtiofs</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </filesystem>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <tpm supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tpm-tis</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tpm-crb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>emulator</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>external</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendVersion'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>2.0</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </tpm>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <redirdev supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </redirdev>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <channel supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pty</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>unix</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </channel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <crypto supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>qemu</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </crypto>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <interface supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>passt</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </interface>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <panic supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>isa</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>hyperv</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </panic>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </devices>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <gic supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <genid supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <backup supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <async-teardown supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <ps2 supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <sev supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <sgx supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <hyperv supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='features'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>relaxed</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vapic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>spinlocks</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vpindex</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>runtime</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>synic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>stimer</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>reset</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vendor_id</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>frequencies</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>reenlightenment</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tlbflush</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ipi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>avic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>emsr_bitmap</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>xmm_input</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </hyperv>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <launchSecurity supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: </domainCapabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.771 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 13 11:34:50 np0005485008 nova_compute[191570]: <domainCapabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <domain>kvm</domain>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <arch>i686</arch>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <vcpu max='4096'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <iothreads supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <os supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <enum name='firmware'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <loader supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>rom</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pflash</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='readonly'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>yes</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='secure'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </loader>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </os>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='maximumMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <vendor>AMD</vendor>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='succor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='custom' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-128'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-256'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-512'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SierraForest'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='athlon'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='athlon-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='core2duo'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='core2duo-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='coreduo'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='coreduo-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='n270'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='n270-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='phenom'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='phenom-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <memoryBacking supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <enum name='sourceType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>file</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>anonymous</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>memfd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </memoryBacking>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <devices>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <disk supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='diskDevice'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>disk</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>cdrom</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>floppy</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>lun</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>fdc</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>sata</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </disk>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <graphics supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vnc</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>egl-headless</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>dbus</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </graphics>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <video supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='modelType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vga</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>cirrus</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>none</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>bochs</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ramfb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </video>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <hostdev supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='mode'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>subsystem</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='startupPolicy'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>mandatory</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>requisite</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>optional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='subsysType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pci</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='capsType'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='pciBackend'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </hostdev>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <rng supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>random</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>egd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </rng>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <filesystem supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='driverType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>path</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>handle</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtiofs</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </filesystem>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <tpm supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tpm-tis</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tpm-crb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>emulator</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>external</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendVersion'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>2.0</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </tpm>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <redirdev supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </redirdev>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <channel supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pty</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>unix</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </channel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <crypto supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>qemu</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </crypto>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <interface supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>passt</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </interface>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <panic supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>isa</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>hyperv</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </panic>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </devices>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <gic supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <genid supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <backup supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <async-teardown supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <ps2 supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <sev supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <sgx supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <hyperv supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='features'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>relaxed</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vapic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>spinlocks</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vpindex</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>runtime</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>synic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>stimer</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>reset</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vendor_id</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>frequencies</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>reenlightenment</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tlbflush</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ipi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>avic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>emsr_bitmap</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>xmm_input</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </hyperv>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <launchSecurity supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: </domainCapabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.812 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.818 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 13 11:34:50 np0005485008 nova_compute[191570]: <domainCapabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <domain>kvm</domain>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <arch>x86_64</arch>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <vcpu max='240'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <iothreads supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <os supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <enum name='firmware'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <loader supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>rom</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pflash</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='readonly'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>yes</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='secure'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </loader>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </os>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='maximumMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <vendor>AMD</vendor>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='succor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='custom' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-128'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-256'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-512'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SierraForest'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='athlon'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='athlon-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='core2duo'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='core2duo-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='coreduo'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='coreduo-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='n270'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='n270-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='phenom'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='phenom-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <memoryBacking supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <enum name='sourceType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>file</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>anonymous</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>memfd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </memoryBacking>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <devices>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <disk supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='diskDevice'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>disk</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>cdrom</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>floppy</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>lun</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ide</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>fdc</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>sata</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </disk>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <graphics supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vnc</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>egl-headless</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>dbus</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </graphics>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <video supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='modelType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vga</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>cirrus</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>none</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>bochs</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ramfb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </video>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <hostdev supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='mode'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>subsystem</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='startupPolicy'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>mandatory</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>requisite</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>optional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='subsysType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pci</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='capsType'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='pciBackend'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </hostdev>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <rng supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>random</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>egd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </rng>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <filesystem supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='driverType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>path</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>handle</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>virtiofs</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </filesystem>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <tpm supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tpm-tis</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tpm-crb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>emulator</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>external</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendVersion'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>2.0</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </tpm>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <redirdev supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </redirdev>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <channel supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pty</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>unix</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </channel>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <crypto supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>qemu</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </crypto>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <interface supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='backendType'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>passt</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </interface>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <panic supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>isa</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>hyperv</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </panic>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </devices>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <gic supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <genid supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <backup supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <async-teardown supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <ps2 supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <sev supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <sgx supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <hyperv supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='features'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>relaxed</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vapic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>spinlocks</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vpindex</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>runtime</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>synic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>stimer</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>reset</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>vendor_id</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>frequencies</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>reenlightenment</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>tlbflush</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>ipi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>avic</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>emsr_bitmap</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>xmm_input</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </hyperv>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <launchSecurity supported='no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </features>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: </domainCapabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 13 11:34:50 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.885 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 13 11:34:50 np0005485008 nova_compute[191570]: <domainCapabilities>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <domain>kvm</domain>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <arch>x86_64</arch>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <vcpu max='4096'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <iothreads supported='yes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <os supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <enum name='firmware'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>efi</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <loader supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>rom</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>pflash</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='readonly'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>yes</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='secure'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>yes</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>no</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </loader>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  </os>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:  <cpu>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <enum name='maximumMigratable'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>on</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <value>off</value>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <vendor>AMD</vendor>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='succor'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:    <mode name='custom' supported='yes'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Denverton-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='auto-ibrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amd-psfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='stibp-always-on'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='EPYC-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-128'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-256'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx10-512'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='prefetchiti'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v1'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v2'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v3'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Haswell-v4'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server'>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:50 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512er'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512pf'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fma4'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='tbm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xop'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-int8'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='amx-tile'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-bf16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-fp16'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bitalg'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrc'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fzrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='la57'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='taa-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xfd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='SierraForest'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-ifma'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cmpccxadd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fbsdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='fsrs'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ibrs-all'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='mcdt-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pbrsb-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='psdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='serialize'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vaes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='hle'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='rtm'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512bw'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512cd'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512dq'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512f'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='avx512vl'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='invpcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pcid'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='pku'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Snowridge'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='mpx'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='core-capability'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='split-lock-detect'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='cldemote'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='erms'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='gfni'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdir64b'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='movdiri'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='xsaves'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='athlon'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='athlon-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='core2duo'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='core2duo-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='coreduo'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='coreduo-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='n270'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='n270-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='ss'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='phenom'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <blockers model='phenom-v1'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnow'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <feature name='3dnowext'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </blockers>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </mode>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:  </cpu>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:  <memoryBacking supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <enum name='sourceType'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <value>file</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <value>anonymous</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <value>memfd</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:  </memoryBacking>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:  <devices>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <disk supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='diskDevice'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>disk</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>cdrom</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>floppy</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>lun</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>fdc</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>sata</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </disk>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <graphics supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>vnc</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>egl-headless</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>dbus</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </graphics>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <video supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='modelType'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>vga</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>cirrus</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>none</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>bochs</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>ramfb</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </video>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <hostdev supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='mode'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>subsystem</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='startupPolicy'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>mandatory</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>requisite</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>optional</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='subsysType'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>pci</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>scsi</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='capsType'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='pciBackend'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </hostdev>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <rng supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio-transitional</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtio-non-transitional</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>random</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>egd</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </rng>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <filesystem supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='driverType'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>path</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>handle</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>virtiofs</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </filesystem>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <tpm supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>tpm-tis</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>tpm-crb</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>emulator</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>external</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='backendVersion'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>2.0</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </tpm>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <redirdev supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='bus'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>usb</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </redirdev>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <channel supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>pty</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>unix</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </channel>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <crypto supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='model'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='type'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>qemu</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='backendModel'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>builtin</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </crypto>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <interface supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='backendType'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>default</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>passt</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </interface>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <panic supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='model'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>isa</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>hyperv</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </panic>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:  </devices>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:  <features>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <gic supported='no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <genid supported='yes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <backup supported='yes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <async-teardown supported='yes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <ps2 supported='yes'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <sev supported='no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <sgx supported='no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <hyperv supported='yes'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      <enum name='features'>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>relaxed</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>vapic</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>spinlocks</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>vpindex</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>runtime</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>synic</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>stimer</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>reset</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>vendor_id</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>frequencies</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>reenlightenment</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>tlbflush</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>ipi</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>avic</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>emsr_bitmap</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:        <value>xmm_input</value>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:      </enum>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    </hyperv>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:    <launchSecurity supported='no'/>
Oct 13 11:34:51 np0005485008 nova_compute[191570]:  </features>
Oct 13 11:34:51 np0005485008 nova_compute[191570]: </domainCapabilities>
Oct 13 11:34:51 np0005485008 nova_compute[191570]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.943 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.944 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.944 2 DEBUG nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.944 2 INFO nova.virt.libvirt.host [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Secure Boot support detected#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.947 2 INFO nova.virt.libvirt.driver [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.947 2 INFO nova.virt.libvirt.driver [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:50.958 2 DEBUG nova.virt.libvirt.driver [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.039 2 INFO nova.virt.node [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Determined node identity b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from /var/lib/nova/compute_id#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.054 2 WARNING nova.compute.manager [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Compute nodes ['b038b2e7-0dfd-4adb-a174-3db2b96fc8ce'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.077 2 INFO nova.compute.manager [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.119 2 WARNING nova.compute.manager [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.120 2 DEBUG oslo_concurrency.lockutils [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.120 2 DEBUG oslo_concurrency.lockutils [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.120 2 DEBUG oslo_concurrency.lockutils [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.121 2 DEBUG nova.compute.resource_tracker [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:34:51 np0005485008 systemd[1]: Starting libvirt nodedev daemon...
Oct 13 11:34:51 np0005485008 systemd[1]: Started libvirt nodedev daemon.
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.395 2 WARNING nova.virt.libvirt.driver [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.396 2 DEBUG nova.compute.resource_tracker [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6163MB free_disk=73.67123413085938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.396 2 DEBUG oslo_concurrency.lockutils [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.396 2 DEBUG oslo_concurrency.lockutils [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.418 2 WARNING nova.compute.resource_tracker [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] No compute node record for compute-1.ctlplane.example.com:b038b2e7-0dfd-4adb-a174-3db2b96fc8ce: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host b038b2e7-0dfd-4adb-a174-3db2b96fc8ce could not be found.#033[00m
Oct 13 11:34:51 np0005485008 python3.9[192427]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.448 2 INFO nova.compute.resource_tracker [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce#033[00m
Oct 13 11:34:51 np0005485008 systemd[1]: Stopping nova_compute container...
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.526 2 DEBUG nova.compute.resource_tracker [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.526 2 DEBUG nova.compute.resource_tracker [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.532 2 DEBUG oslo_concurrency.lockutils [None req-50848102-788d-4d69-a73a-3d758b2edafc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.532 2 DEBUG oslo_concurrency.lockutils [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.533 2 DEBUG oslo_concurrency.lockutils [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:34:51 np0005485008 nova_compute[191570]: 2025-10-13 15:34:51.533 2 DEBUG oslo_concurrency.lockutils [None req-83443ecd-1257-47b7-b54b-94d4cf047431 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:34:51 np0005485008 virtqemud[192082]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 13 11:34:51 np0005485008 virtqemud[192082]: hostname: compute-1
Oct 13 11:34:51 np0005485008 virtqemud[192082]: End of file while reading data: Input/output error
Oct 13 11:34:51 np0005485008 systemd[1]: libpod-907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261.scope: Deactivated successfully.
Oct 13 11:34:51 np0005485008 systemd[1]: libpod-907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261.scope: Consumed 3.203s CPU time.
Oct 13 11:34:51 np0005485008 podman[192453]: 2025-10-13 15:34:51.895831734 +0000 UTC m=+0.400818280 container died 907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:34:51 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261-userdata-shm.mount: Deactivated successfully.
Oct 13 11:34:51 np0005485008 systemd[1]: var-lib-containers-storage-overlay-2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d-merged.mount: Deactivated successfully.
Oct 13 11:34:51 np0005485008 podman[192453]: 2025-10-13 15:34:51.964563947 +0000 UTC m=+0.469550533 container cleanup 907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 11:34:51 np0005485008 podman[192453]: nova_compute
Oct 13 11:34:52 np0005485008 podman[192483]: nova_compute
Oct 13 11:34:52 np0005485008 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 13 11:34:52 np0005485008 systemd[1]: Stopped nova_compute container.
Oct 13 11:34:52 np0005485008 systemd[1]: Starting nova_compute container...
Oct 13 11:34:52 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:34:52 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:52 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:52 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:52 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:52 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f6c21e527631321e0490147eb51cb4cedc32b415cdecf45ed9b201a2944e48d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:52 np0005485008 podman[192496]: 2025-10-13 15:34:52.216362923 +0000 UTC m=+0.122917284 container init 907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 11:34:52 np0005485008 podman[192496]: 2025-10-13 15:34:52.230654717 +0000 UTC m=+0.137208988 container start 907bcc45fc9859d353a6588046f809c83e9e8c20e54ec85d7391c717b4e39261 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 11:34:52 np0005485008 podman[192496]: nova_compute
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + sudo -E kolla_set_configs
Oct 13 11:34:52 np0005485008 systemd[1]: Started nova_compute container.
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Validating config file
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying service configuration files
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /etc/ceph
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Creating directory /etc/ceph
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /etc/ceph
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Writing out command to execute
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:52 np0005485008 nova_compute[192512]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 13 11:34:52 np0005485008 nova_compute[192512]: ++ cat /run_command
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + CMD=nova-compute
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + ARGS=
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + sudo kolla_copy_cacerts
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + [[ ! -n '' ]]
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + . kolla_extend_start
Oct 13 11:34:52 np0005485008 nova_compute[192512]: Running command: 'nova-compute'
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + echo 'Running command: '\''nova-compute'\'''
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + umask 0022
Oct 13 11:34:52 np0005485008 nova_compute[192512]: + exec nova-compute
Oct 13 11:34:53 np0005485008 python3.9[192675]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 13 11:34:53 np0005485008 systemd[1]: Started libpod-conmon-1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41.scope.
Oct 13 11:34:53 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:34:53 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896e1ec59952fee5f90654756c5f055be0b7d86bff4738e2c1998f3b109634d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:53 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896e1ec59952fee5f90654756c5f055be0b7d86bff4738e2c1998f3b109634d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:53 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3896e1ec59952fee5f90654756c5f055be0b7d86bff4738e2c1998f3b109634d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 13 11:34:53 np0005485008 podman[192700]: 2025-10-13 15:34:53.351386337 +0000 UTC m=+0.146952157 container init 1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 11:34:53 np0005485008 podman[192700]: 2025-10-13 15:34:53.365285939 +0000 UTC m=+0.160851749 container start 1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 13 11:34:53 np0005485008 python3.9[192675]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Applying nova statedir ownership
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 13 11:34:53 np0005485008 nova_compute_init[192721]: INFO:nova_statedir:Nova statedir ownership complete
Oct 13 11:34:53 np0005485008 systemd[1]: libpod-1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41.scope: Deactivated successfully.
Oct 13 11:34:53 np0005485008 podman[192722]: 2025-10-13 15:34:53.449675119 +0000 UTC m=+0.046264350 container died 1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Oct 13 11:34:53 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41-userdata-shm.mount: Deactivated successfully.
Oct 13 11:34:53 np0005485008 systemd[1]: var-lib-containers-storage-overlay-3896e1ec59952fee5f90654756c5f055be0b7d86bff4738e2c1998f3b109634d-merged.mount: Deactivated successfully.
Oct 13 11:34:53 np0005485008 podman[192735]: 2025-10-13 15:34:53.524120014 +0000 UTC m=+0.064153449 container cleanup 1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:34:53 np0005485008 systemd[1]: libpod-conmon-1b4d5f7d53d0445e21107978c62c747e10936a56b2bdb1a84afd10526fcf2a41.scope: Deactivated successfully.
Oct 13 11:34:54 np0005485008 systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 11:34:54 np0005485008 systemd[1]: session-26.scope: Consumed 2min 26.489s CPU time.
Oct 13 11:34:54 np0005485008 systemd-logind[784]: Session 26 logged out. Waiting for processes to exit.
Oct 13 11:34:54 np0005485008 systemd-logind[784]: Removed session 26.
Oct 13 11:34:54 np0005485008 nova_compute[192512]: 2025-10-13 15:34:54.248 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 13 11:34:54 np0005485008 nova_compute[192512]: 2025-10-13 15:34:54.248 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 13 11:34:54 np0005485008 nova_compute[192512]: 2025-10-13 15:34:54.248 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct 13 11:34:54 np0005485008 nova_compute[192512]: 2025-10-13 15:34:54.249 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 13 11:34:54 np0005485008 nova_compute[192512]: 2025-10-13 15:34:54.389 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 11:34:54 np0005485008 nova_compute[192512]: 2025-10-13 15:34:54.417 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 11:34:54 np0005485008 nova_compute[192512]: 2025-10-13 15:34:54.952 2 INFO nova.virt.driver [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.048 2 INFO nova.compute.provider_config [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.056 2 DEBUG oslo_concurrency.lockutils [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.057 2 DEBUG oslo_concurrency.lockutils [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.057 2 DEBUG oslo_concurrency.lockutils [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.057 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.057 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.057 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.058 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.058 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.058 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.058 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.058 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.059 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.059 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.059 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.059 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.059 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.059 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.060 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.060 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.060 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.060 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.060 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.060 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.060 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.061 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.061 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.061 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.061 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.061 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.061 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.062 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.062 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.062 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.062 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.062 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.062 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.062 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.063 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.063 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.063 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.063 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.063 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.063 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.064 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.064 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.064 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.064 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.064 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.064 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.064 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.065 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.065 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.065 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.065 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.065 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.065 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.065 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.066 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.066 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.066 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.066 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.066 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.066 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.066 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.067 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.067 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.067 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.067 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.067 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.067 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.067 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.068 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.068 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.068 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.068 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.068 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.068 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.068 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.069 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.069 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.069 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.069 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.069 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.069 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.069 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.070 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.070 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.070 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.070 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.070 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.070 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.070 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.071 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.071 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.071 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.071 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.071 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.071 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.071 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.072 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.072 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.072 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.072 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.072 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.072 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.072 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.073 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.074 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.074 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.074 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.074 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.074 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.074 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.074 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.075 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.075 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.075 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.075 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.075 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.075 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.075 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.076 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.076 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.076 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.076 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.076 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.076 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.076 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.077 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.077 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.077 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.077 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.077 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.077 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.077 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.078 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.078 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.078 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.078 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.078 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.078 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.078 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.079 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.079 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.079 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.079 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.079 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.080 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.080 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.080 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.080 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.080 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.080 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.081 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.081 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.081 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.081 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.081 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.082 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.082 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.082 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.082 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.082 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.082 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.083 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.083 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.083 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.083 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.083 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.083 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.084 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.084 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.084 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.084 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.084 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.085 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.085 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.085 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.085 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.085 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.085 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.086 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.086 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.086 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.086 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.086 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.087 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.087 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.087 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.087 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.088 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.088 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.088 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.088 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.088 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.089 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.089 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.089 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.089 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.089 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.089 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.090 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.090 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.090 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.090 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.090 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.090 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.091 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.091 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.091 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.091 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.091 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.091 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.092 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.092 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.092 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.092 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.092 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.092 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.093 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.093 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.093 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.093 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.093 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.093 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.093 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.094 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.094 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.094 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.094 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.094 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.094 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.094 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.095 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.095 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.095 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.095 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.095 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.095 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.095 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.096 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.096 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.096 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.096 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.096 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.096 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.096 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.097 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.097 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.097 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.097 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.097 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.097 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.097 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.098 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.098 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.098 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.098 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.098 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.098 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.099 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.099 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.099 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.099 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.099 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.099 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.099 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.100 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.100 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.100 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.100 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.100 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.100 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.100 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.101 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.101 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.101 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.101 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.101 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.101 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.101 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.102 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.102 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.102 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.102 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.102 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.102 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.102 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.103 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.103 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.103 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.103 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.103 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.103 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.104 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.104 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.104 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.104 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.104 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.104 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.104 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.105 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.105 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.105 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.105 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.105 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.105 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.105 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.106 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.106 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.106 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.106 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.106 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.106 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.106 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.107 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.107 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.107 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.107 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.107 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.107 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.107 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.108 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.108 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.108 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.108 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.108 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.108 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.108 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.109 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.109 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.109 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.109 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.109 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.109 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.109 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.110 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.110 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.110 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.110 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.110 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.110 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.110 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.111 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.111 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.111 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.111 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.111 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.111 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.111 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.112 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.112 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.112 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.112 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.112 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.112 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.113 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.113 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.113 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.113 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.113 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.113 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.113 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.114 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.115 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.115 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.115 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.115 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.115 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.115 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.115 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.116 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.116 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.116 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.116 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.116 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.116 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.116 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.117 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.117 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.117 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.117 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.117 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.117 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.117 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.118 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.119 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.119 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.119 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.119 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.119 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.119 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.119 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.120 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.120 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.120 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.120 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.120 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.120 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.120 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.121 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.122 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.122 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.122 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.122 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.122 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.122 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.122 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.123 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.123 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.123 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.123 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.123 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.123 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.123 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.124 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.125 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.125 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.125 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.125 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.125 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.125 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.125 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.126 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.126 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.126 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.126 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.126 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.126 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.126 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.127 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.127 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.127 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.127 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.127 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.127 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.127 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.128 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.128 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.128 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.128 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.128 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.128 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.128 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.129 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.129 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.129 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.129 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.129 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.129 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.129 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.130 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.131 2 WARNING oslo_config.cfg [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 13 11:34:55 np0005485008 nova_compute[192512]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 13 11:34:55 np0005485008 nova_compute[192512]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 13 11:34:55 np0005485008 nova_compute[192512]: and ``live_migration_inbound_addr`` respectively.
Oct 13 11:34:55 np0005485008 nova_compute[192512]: ).  Its value may be silently ignored in the future.#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.131 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.131 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.131 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.131 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.131 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.132 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.132 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.132 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.132 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.132 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.132 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.133 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.133 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.133 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.133 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.133 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.133 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.133 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.134 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.135 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.135 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.135 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.135 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.135 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.135 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.136 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.136 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.136 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.136 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.136 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.136 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.136 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.137 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.137 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.137 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.137 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.137 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.137 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.137 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.138 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.138 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.138 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.138 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.138 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.138 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.138 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.139 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.139 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.139 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.139 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.139 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.139 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.139 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.140 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.140 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.140 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.140 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.140 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.140 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.140 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.141 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.141 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.141 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.141 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.141 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.141 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.141 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.142 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.142 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.142 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.142 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.142 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.142 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.142 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.143 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.143 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.143 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.143 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.143 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.143 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.144 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.144 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.144 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.144 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.144 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.144 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.144 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.145 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.145 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.145 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.145 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.145 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.145 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.146 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.146 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.146 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.146 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.146 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.146 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.146 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.147 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.147 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.147 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.147 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.147 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.147 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.147 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.148 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.148 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.148 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.148 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.148 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.148 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.148 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.149 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.149 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.149 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.149 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.149 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.149 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.149 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.150 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.150 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.150 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.150 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.150 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.150 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.150 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.151 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.151 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.151 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.151 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.151 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.151 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.151 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.152 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.152 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.152 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.152 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.152 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.152 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.152 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.153 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.153 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.153 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.153 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.153 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.153 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.154 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.154 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.154 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.154 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.154 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.154 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.154 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.155 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.155 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.155 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.155 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.155 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.155 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.156 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.156 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.156 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.156 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.156 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.156 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.156 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.157 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.157 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.157 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.157 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.157 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.157 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.157 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.158 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.158 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.158 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.158 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.158 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.158 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.158 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.159 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.159 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.159 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.159 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.159 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.159 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.159 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.160 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.160 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.160 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.160 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.160 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.160 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.161 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.161 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.161 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.161 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.161 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.161 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.161 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.162 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.162 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.162 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.162 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.162 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.162 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.162 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.163 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.163 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.163 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.163 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.163 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.163 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.163 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.164 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.164 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.164 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.164 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.164 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.164 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.164 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.165 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.165 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.165 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.165 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.165 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.165 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.165 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.166 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.166 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.166 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.166 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.166 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.166 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.166 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.167 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.167 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.167 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.167 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.167 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.168 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.168 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.168 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.168 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.168 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.168 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.168 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.169 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.169 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.169 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.169 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.169 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.169 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.169 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.170 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.170 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.170 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.170 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.170 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.170 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.171 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.171 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.171 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.171 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.171 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.171 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.171 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.172 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.172 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.172 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.172 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.172 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.172 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.172 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.173 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.173 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.173 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.173 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.173 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.173 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.173 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.174 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.174 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.174 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.174 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.174 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.174 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.174 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.175 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.175 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.175 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.175 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.175 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.175 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.176 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.176 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.176 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.176 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.176 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.176 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.176 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.177 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.177 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.177 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.177 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.177 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.177 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.177 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.178 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.178 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.178 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.178 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.178 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.178 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.179 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.179 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.179 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.179 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.179 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.180 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.180 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.180 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.180 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.180 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.180 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.180 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.181 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.181 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.181 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.181 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.181 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.181 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.181 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.182 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.183 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.183 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.183 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.183 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.183 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.183 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.183 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.184 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.185 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.185 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.185 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.185 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.185 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.185 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.185 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.186 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.186 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.186 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.186 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.186 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.186 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.186 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.187 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.187 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.187 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.187 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.187 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.187 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.187 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.188 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.188 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.188 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.188 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.188 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.188 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.188 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.189 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.189 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.189 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.189 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.189 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.189 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.190 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.190 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.190 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.190 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.190 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.191 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.191 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.191 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.191 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.192 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.192 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.192 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.192 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.192 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.192 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.193 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.193 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.193 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.193 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.193 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.194 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.194 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.194 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.194 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.194 2 DEBUG oslo_service.service [None req-992dc001-a687-426a-8543-d4e5c0059c6d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.196 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.212 2 INFO nova.virt.node [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Determined node identity b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from /var/lib/nova/compute_id#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.213 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.214 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.214 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.214 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.228 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7ab7fcf430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.230 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7ab7fcf430> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.231 2 INFO nova.virt.libvirt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Connection event '1' reason 'None'#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.239 2 INFO nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Libvirt host capabilities <capabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <host>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <uuid>48631c7f-32de-410b-ace1-0785fa2d7327</uuid>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <arch>x86_64</arch>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model>EPYC-Rome-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <vendor>AMD</vendor>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <microcode version='16777317'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <signature family='23' model='49' stepping='0'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <maxphysaddr mode='emulate' bits='40'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='x2apic'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='tsc-deadline'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='osxsave'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='hypervisor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='tsc_adjust'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='spec-ctrl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='stibp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='arch-capabilities'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='cmp_legacy'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='topoext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='virt-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='lbrv'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='tsc-scale'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='vmcb-clean'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='pause-filter'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='pfthreshold'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='svme-addr-chk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='rdctl-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='skip-l1dfl-vmentry'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='mds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature name='pschange-mc-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <pages unit='KiB' size='4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <pages unit='KiB' size='2048'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <pages unit='KiB' size='1048576'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <power_management>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <suspend_mem/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <suspend_disk/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <suspend_hybrid/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </power_management>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <iommu support='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <migration_features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <live/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <uri_transports>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <uri_transport>tcp</uri_transport>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <uri_transport>rdma</uri_transport>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </uri_transports>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </migration_features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <topology>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <cells num='1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <cell id='0'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          <memory unit='KiB'>7864352</memory>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          <pages unit='KiB' size='4'>1966088</pages>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          <pages unit='KiB' size='2048'>0</pages>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          <pages unit='KiB' size='1048576'>0</pages>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          <distances>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <sibling id='0' value='10'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          </distances>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          <cpus num='8'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:          </cpus>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        </cell>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </cells>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </topology>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <cache>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </cache>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <secmodel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model>selinux</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <doi>0</doi>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </secmodel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <secmodel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model>dac</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <doi>0</doi>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </secmodel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </host>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <guest>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <os_type>hvm</os_type>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <arch name='i686'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <wordsize>32</wordsize>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <domain type='qemu'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <domain type='kvm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </arch>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <pae/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <nonpae/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <acpi default='on' toggle='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <apic default='on' toggle='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <cpuselection/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <deviceboot/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <disksnapshot default='on' toggle='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <externalSnapshot/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </guest>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <guest>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <os_type>hvm</os_type>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <arch name='x86_64'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <wordsize>64</wordsize>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <domain type='qemu'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <domain type='kvm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </arch>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <acpi default='on' toggle='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <apic default='on' toggle='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <cpuselection/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <deviceboot/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <disksnapshot default='on' toggle='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <externalSnapshot/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </guest>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 
Oct 13 11:34:55 np0005485008 nova_compute[192512]: </capabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: #033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.246 2 DEBUG nova.virt.libvirt.volume.mount [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.254 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.263 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 13 11:34:55 np0005485008 nova_compute[192512]: <domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <domain>kvm</domain>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <arch>i686</arch>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <vcpu max='4096'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <iothreads supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <os supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='firmware'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <loader supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>rom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pflash</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='readonly'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>yes</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='secure'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </loader>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='maximumMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <vendor>AMD</vendor>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='succor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='custom' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-128'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-256'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-512'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <memoryBacking supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='sourceType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>file</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>anonymous</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>memfd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </memoryBacking>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <disk supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='diskDevice'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>disk</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cdrom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>floppy</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>lun</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>fdc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>sata</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <graphics supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vnc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egl-headless</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>dbus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </graphics>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <video supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='modelType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vga</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cirrus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>none</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>bochs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ramfb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hostdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='mode'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>subsystem</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='startupPolicy'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>mandatory</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>requisite</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>optional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='subsysType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pci</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='capsType'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='pciBackend'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hostdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <rng supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>random</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <filesystem supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='driverType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>path</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>handle</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtiofs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </filesystem>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <tpm supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-tis</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-crb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emulator</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>external</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendVersion'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>2.0</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </tpm>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <redirdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </redirdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <channel supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pty</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>unix</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </channel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <crypto supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>qemu</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </crypto>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <interface supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>passt</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <panic supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>isa</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>hyperv</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </panic>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <gic supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <genid supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backup supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <async-teardown supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <ps2 supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sev supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sgx supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hyperv supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='features'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>relaxed</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vapic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>spinlocks</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vpindex</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>runtime</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>synic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>stimer</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reset</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vendor_id</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>frequencies</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reenlightenment</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tlbflush</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ipi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>avic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emsr_bitmap</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>xmm_input</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hyperv>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <launchSecurity supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: </domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.269 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 13 11:34:55 np0005485008 nova_compute[192512]: <domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <domain>kvm</domain>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <arch>i686</arch>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <vcpu max='240'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <iothreads supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <os supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='firmware'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <loader supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>rom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pflash</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='readonly'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>yes</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='secure'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </loader>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='maximumMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <vendor>AMD</vendor>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='succor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='custom' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-128'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-256'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-512'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <memoryBacking supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='sourceType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>file</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>anonymous</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>memfd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </memoryBacking>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <disk supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='diskDevice'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>disk</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cdrom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>floppy</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>lun</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ide</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>fdc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>sata</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <graphics supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vnc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egl-headless</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>dbus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </graphics>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <video supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='modelType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vga</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cirrus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>none</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>bochs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ramfb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hostdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='mode'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>subsystem</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='startupPolicy'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>mandatory</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>requisite</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>optional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='subsysType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pci</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='capsType'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='pciBackend'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hostdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <rng supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>random</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <filesystem supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='driverType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>path</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>handle</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtiofs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </filesystem>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <tpm supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-tis</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-crb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emulator</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>external</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendVersion'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>2.0</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </tpm>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <redirdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </redirdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <channel supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pty</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>unix</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </channel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <crypto supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>qemu</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </crypto>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <interface supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>passt</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <panic supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>isa</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>hyperv</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </panic>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <gic supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <genid supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backup supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <async-teardown supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <ps2 supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sev supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sgx supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hyperv supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='features'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>relaxed</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vapic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>spinlocks</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vpindex</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>runtime</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>synic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>stimer</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reset</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vendor_id</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>frequencies</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reenlightenment</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tlbflush</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ipi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>avic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emsr_bitmap</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>xmm_input</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hyperv>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <launchSecurity supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: </domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.304 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.309 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 13 11:34:55 np0005485008 nova_compute[192512]: <domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <domain>kvm</domain>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <machine>pc-q35-rhel9.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <arch>x86_64</arch>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <vcpu max='4096'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <iothreads supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <os supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='firmware'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>efi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <loader supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>rom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pflash</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='readonly'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>yes</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='secure'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>yes</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </loader>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='maximumMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <vendor>AMD</vendor>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='succor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='custom' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-128'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-256'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-512'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <memoryBacking supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='sourceType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>file</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>anonymous</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>memfd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </memoryBacking>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <disk supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='diskDevice'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>disk</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cdrom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>floppy</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>lun</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>fdc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>sata</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <graphics supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vnc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egl-headless</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>dbus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </graphics>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <video supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='modelType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vga</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cirrus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>none</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>bochs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ramfb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hostdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='mode'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>subsystem</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='startupPolicy'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>mandatory</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>requisite</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>optional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='subsysType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pci</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='capsType'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='pciBackend'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hostdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <rng supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>random</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <filesystem supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='driverType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>path</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>handle</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtiofs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </filesystem>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <tpm supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-tis</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-crb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emulator</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>external</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendVersion'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>2.0</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </tpm>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <redirdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </redirdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <channel supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pty</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>unix</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </channel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <crypto supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>qemu</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </crypto>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <interface supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>passt</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <panic supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>isa</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>hyperv</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </panic>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <gic supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <genid supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backup supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <async-teardown supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <ps2 supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sev supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sgx supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hyperv supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='features'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>relaxed</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vapic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>spinlocks</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vpindex</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>runtime</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>synic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>stimer</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reset</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vendor_id</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>frequencies</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reenlightenment</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tlbflush</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ipi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>avic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emsr_bitmap</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>xmm_input</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hyperv>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <launchSecurity supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: </domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.371 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 13 11:34:55 np0005485008 nova_compute[192512]: <domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <path>/usr/libexec/qemu-kvm</path>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <domain>kvm</domain>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <arch>x86_64</arch>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <vcpu max='240'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <iothreads supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <os supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='firmware'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <loader supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>rom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pflash</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='readonly'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>yes</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='secure'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>no</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </loader>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-passthrough' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='hostPassthroughMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='maximum' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='maximumMigratable'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>on</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>off</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='host-model' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model fallback='forbid'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <vendor>AMD</vendor>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='x2apic'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-deadline'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='hypervisor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc_adjust'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='spec-ctrl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='stibp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='arch-capabilities'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='cmp_legacy'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='overflow-recov'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='succor'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='amd-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='virt-ssbd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lbrv'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='tsc-scale'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='vmcb-clean'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='flushbyasid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pause-filter'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pfthreshold'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='svme-addr-chk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='lfence-always-serializing'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rdctl-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='mds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='pschange-mc-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='gds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='require' name='rfds-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <feature policy='disable' name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <mode name='custom' supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Broadwell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cascadelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Cooperlake-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Denverton-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Dhyana-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Genoa-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='auto-ibrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Milan-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amd-psfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='no-nested-data-bp'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='null-sel-clr-base'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='stibp-always-on'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-Rome-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='EPYC-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='GraniteRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-128'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-256'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx10-512'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='prefetchiti'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Haswell-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-noTSX'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v6'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Icelake-Server-v7'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='IvyBridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='KnightsMill-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4fmaps'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-4vnniw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512er'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512pf'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G4-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Opteron_G5-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fma4'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tbm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xop'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SapphireRapids-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='amx-tile'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-bf16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-fp16'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512-vpopcntdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bitalg'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vbmi2'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrc'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fzrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='la57'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='taa-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='tsx-ldtrk'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xfd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='SierraForest-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ifma'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-ne-convert'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx-vnni-int8'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='bus-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cmpccxadd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fbsdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='fsrs'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ibrs-all'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mcdt-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pbrsb-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='psdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='sbdr-ssdp-no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='serialize'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vaes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='vpclmulqdq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Client-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='hle'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='rtm'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Skylake-Server-v5'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512bw'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512cd'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512dq'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512f'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='avx512vl'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='invpcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pcid'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='pku'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='mpx'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v2'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v3'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='core-capability'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='split-lock-detect'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='Snowridge-v4'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='cldemote'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='erms'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='gfni'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdir64b'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='movdiri'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='xsaves'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='athlon-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='core2duo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='coreduo-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='n270-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='ss'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <blockers model='phenom-v1'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnow'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <feature name='3dnowext'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </blockers>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </mode>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <memoryBacking supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <enum name='sourceType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>file</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>anonymous</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <value>memfd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </memoryBacking>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <disk supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='diskDevice'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>disk</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cdrom</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>floppy</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>lun</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ide</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>fdc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>sata</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <graphics supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vnc</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egl-headless</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>dbus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </graphics>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <video supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='modelType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vga</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>cirrus</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>none</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>bochs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ramfb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hostdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='mode'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>subsystem</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='startupPolicy'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>mandatory</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>requisite</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>optional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='subsysType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pci</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>scsi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='capsType'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='pciBackend'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hostdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <rng supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtio-non-transitional</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>random</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>egd</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <filesystem supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='driverType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>path</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>handle</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>virtiofs</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </filesystem>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <tpm supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-tis</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tpm-crb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emulator</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>external</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendVersion'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>2.0</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </tpm>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <redirdev supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='bus'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>usb</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </redirdev>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <channel supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>pty</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>unix</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </channel>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <crypto supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='type'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>qemu</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendModel'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>builtin</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </crypto>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <interface supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='backendType'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>default</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>passt</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <panic supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='model'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>isa</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>hyperv</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </panic>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <gic supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <vmcoreinfo supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <genid supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backingStoreInput supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <backup supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <async-teardown supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <ps2 supported='yes'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sev supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <sgx supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <hyperv supported='yes'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      <enum name='features'>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>relaxed</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vapic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>spinlocks</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vpindex</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>runtime</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>synic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>stimer</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reset</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>vendor_id</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>frequencies</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>reenlightenment</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>tlbflush</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>ipi</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>avic</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>emsr_bitmap</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:        <value>xmm_input</value>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:      </enum>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    </hyperv>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:    <launchSecurity supported='no'/>
Oct 13 11:34:55 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: </domainCapabilities>
Oct 13 11:34:55 np0005485008 nova_compute[192512]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.431 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.432 2 INFO nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Secure Boot support detected#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.434 2 INFO nova.virt.libvirt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.441 2 DEBUG nova.virt.libvirt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.458 2 INFO nova.virt.node [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Determined node identity b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from /var/lib/nova/compute_id#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.476 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Verified node b038b2e7-0dfd-4adb-a174-3db2b96fc8ce matches my host compute-1.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.499 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.578 2 ERROR nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Could not retrieve compute node resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'b038b2e7-0dfd-4adb-a174-3db2b96fc8ce' not found: No resource provider with uuid b038b2e7-0dfd-4adb-a174-3db2b96fc8ce found  ", "request_id": "req-b5aba8a9-333d-4231-abe0-689f1bbf6120"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'b038b2e7-0dfd-4adb-a174-3db2b96fc8ce' not found: No resource provider with uuid b038b2e7-0dfd-4adb-a174-3db2b96fc8ce found  ", "request_id": "req-b5aba8a9-333d-4231-abe0-689f1bbf6120"}]}#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.603 2 DEBUG oslo_concurrency.lockutils [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.603 2 DEBUG oslo_concurrency.lockutils [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.603 2 DEBUG oslo_concurrency.lockutils [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.603 2 DEBUG nova.compute.resource_tracker [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.761 2 WARNING nova.virt.libvirt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.762 2 DEBUG nova.compute.resource_tracker [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6179MB free_disk=73.6703872680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.762 2 DEBUG oslo_concurrency.lockutils [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:34:55 np0005485008 nova_compute[192512]: 2025-10-13 15:34:55.762 2 DEBUG oslo_concurrency.lockutils [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.266 2 ERROR nova.compute.resource_tracker [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'b038b2e7-0dfd-4adb-a174-3db2b96fc8ce' not found: No resource provider with uuid b038b2e7-0dfd-4adb-a174-3db2b96fc8ce found  ", "request_id": "req-6f7612cc-cc24-4a18-9891-8623760da296"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'b038b2e7-0dfd-4adb-a174-3db2b96fc8ce' not found: No resource provider with uuid b038b2e7-0dfd-4adb-a174-3db2b96fc8ce found  ", "request_id": "req-6f7612cc-cc24-4a18-9891-8623760da296"}]}#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.267 2 DEBUG nova.compute.resource_tracker [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.267 2 DEBUG nova.compute.resource_tracker [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.394 2 INFO nova.scheduler.client.report [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [req-9b29f916-ae81-4c24-80d3-63ce0cabf052] Created resource provider record via placement API for resource provider with UUID b038b2e7-0dfd-4adb-a174-3db2b96fc8ce and name compute-1.ctlplane.example.com.#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.452 2 DEBUG nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 13 11:34:56 np0005485008 nova_compute[192512]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.452 2 INFO nova.virt.libvirt.host [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.453 2 DEBUG nova.compute.provider_tree [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.453 2 DEBUG nova.virt.libvirt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.493 2 DEBUG nova.scheduler.client.report [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Updated inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.494 2 DEBUG nova.compute.provider_tree [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Updating resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.494 2 DEBUG nova.compute.provider_tree [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.593 2 DEBUG nova.compute.provider_tree [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Updating resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.617 2 DEBUG nova.compute.resource_tracker [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.618 2 DEBUG oslo_concurrency.lockutils [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.618 2 DEBUG nova.service [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.740 2 DEBUG nova.service [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct 13 11:34:56 np0005485008 nova_compute[192512]: 2025-10-13 15:34:56.741 2 DEBUG nova.servicegroup.drivers.db [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct 13 11:35:00 np0005485008 systemd-logind[784]: New session 29 of user zuul.
Oct 13 11:35:00 np0005485008 systemd[1]: Started Session 29 of User zuul.
Oct 13 11:35:01 np0005485008 python3.9[192964]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 13 11:35:03 np0005485008 python3.9[193120]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:35:03 np0005485008 systemd[1]: Reloading.
Oct 13 11:35:03 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:35:03 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:35:04 np0005485008 python3.9[193305]: ansible-ansible.builtin.service_facts Invoked
Oct 13 11:35:04 np0005485008 network[193322]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 13 11:35:04 np0005485008 network[193323]: 'network-scripts' will be removed from distribution in near future.
Oct 13 11:35:04 np0005485008 network[193324]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 13 11:35:09 np0005485008 podman[193367]: 2025-10-13 15:35:09.029415295 +0000 UTC m=+0.128188331 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 11:35:09 np0005485008 podman[193412]: 2025-10-13 15:35:09.438994213 +0000 UTC m=+0.098003774 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:35:10 np0005485008 podman[193441]: 2025-10-13 15:35:10.588225819 +0000 UTC m=+0.049510993 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Oct 13 11:35:12 np0005485008 podman[193667]: 2025-10-13 15:35:12.588289335 +0000 UTC m=+0.077280675 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 11:35:12 np0005485008 python3.9[193668]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:35:13 np0005485008 python3.9[193840]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:13 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 11:35:14 np0005485008 python3.9[193993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:15 np0005485008 nova_compute[192512]: 2025-10-13 15:35:15.743 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:15 np0005485008 nova_compute[192512]: 2025-10-13 15:35:15.770 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:15 np0005485008 python3.9[194145]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:35:16 np0005485008 python3.9[194297]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 11:35:17 np0005485008 python3.9[194449]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:35:17 np0005485008 systemd[1]: Reloading.
Oct 13 11:35:17 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:35:17 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:35:18 np0005485008 python3.9[194636]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:35:19 np0005485008 python3.9[194789]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:35:20 np0005485008 python3.9[194939]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:35:21 np0005485008 python3.9[195091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:22 np0005485008 python3.9[195212]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369720.889611-252-119776324537297/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=a9bdb897f3979025d9a372b4beff53a09cbe0d55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:35:23 np0005485008 python3.9[195364]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Oct 13 11:35:24 np0005485008 python3.9[195516]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Oct 13 11:35:25 np0005485008 python3.9[195669]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 13 11:35:26 np0005485008 python3.9[195827]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 13 11:35:28 np0005485008 python3.9[195985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:28 np0005485008 python3.9[196106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760369727.7424035-388-120077019717383/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:29 np0005485008 python3.9[196256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:29 np0005485008 python3.9[196377]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760369728.927403-388-209683794345798/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:30 np0005485008 python3.9[196527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:31 np0005485008 python3.9[196648]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1760369730.1138825-388-103056652774978/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:31 np0005485008 auditd[700]: Audit daemon rotating log files
Oct 13 11:35:32 np0005485008 python3.9[196798]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:35:33 np0005485008 python3.9[196950]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:35:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:35:33.935 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:35:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:35:33.937 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:35:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:35:33.937 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:35:33 np0005485008 python3.9[197102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:34 np0005485008 python3.9[197223]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369733.5001314-506-195856881685/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:35 np0005485008 python3.9[197373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:35 np0005485008 python3.9[197449]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:36 np0005485008 python3.9[197599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:36 np0005485008 python3.9[197720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369735.916682-506-12522434776470/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=17453a32c9d181134878b3e453cb84c3cd9bd67d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:37 np0005485008 python3.9[197870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:38 np0005485008 python3.9[197991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369737.2566154-506-192771407869430/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:38 np0005485008 python3.9[198141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:39 np0005485008 podman[198236]: 2025-10-13 15:35:39.49029197 +0000 UTC m=+0.093057567 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 11:35:39 np0005485008 podman[198289]: 2025-10-13 15:35:39.598333951 +0000 UTC m=+0.063777397 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 13 11:35:39 np0005485008 python3.9[198278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369738.4968617-506-29932852351311/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:40 np0005485008 python3.9[198456]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:40 np0005485008 podman[198550]: 2025-10-13 15:35:40.764278868 +0000 UTC m=+0.061307788 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 11:35:40 np0005485008 python3.9[198593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369739.8029773-506-38900795877556/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:41 np0005485008 python3.9[198743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:42 np0005485008 python3.9[198864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369741.1241665-506-252843345871670/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:42 np0005485008 podman[198964]: 2025-10-13 15:35:42.821371965 +0000 UTC m=+0.108114505 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:35:43 np0005485008 python3.9[199035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:43 np0005485008 python3.9[199156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369742.513708-506-209038872832610/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:44 np0005485008 python3.9[199306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:44 np0005485008 python3.9[199427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369743.6925998-506-56027689874886/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:45 np0005485008 python3.9[199577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:45 np0005485008 python3.9[199698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369744.9698532-506-270277820823442/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:46 np0005485008 python3.9[199848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:47 np0005485008 python3.9[199969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369746.1536076-506-4215834400036/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:48 np0005485008 python3.9[200119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:48 np0005485008 python3.9[200195]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:49 np0005485008 python3.9[200345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:50 np0005485008 python3.9[200421]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:50 np0005485008 python3.9[200571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:51 np0005485008 python3.9[200647]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:52 np0005485008 python3.9[200799]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:52 np0005485008 python3.9[200951]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:35:53 np0005485008 python3.9[201103]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:35:54 np0005485008 python3.9[201255]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:35:54 np0005485008 systemd[1]: Reloading.
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.444 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.444 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.445 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.445 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.446 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.446 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.446 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.446 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.446 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:35:54 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:35:54 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.479 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.480 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.481 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.481 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.619 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.620 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6107MB free_disk=73.67278671264648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:35:54 np0005485008 systemd[1]: Listening on Podman API Socket.
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.791 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.792 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.810 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.825 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.826 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:35:54 np0005485008 nova_compute[192512]: 2025-10-13 15:35:54.827 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:35:55 np0005485008 python3.9[201445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:35:56 np0005485008 python3.9[201568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369755.0373816-950-151930422082471/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:35:57 np0005485008 python3.9[201720]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Oct 13 11:35:58 np0005485008 python3.9[201872]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:35:59 np0005485008 python3[202024]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:36:01 np0005485008 podman[202037]: 2025-10-13 15:36:01.237035201 +0000 UTC m=+1.492796333 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 13 11:36:01 np0005485008 podman[202133]: 2025-10-13 15:36:01.447741126 +0000 UTC m=+0.088963363 container create 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Oct 13 11:36:01 np0005485008 podman[202133]: 2025-10-13 15:36:01.387403131 +0000 UTC m=+0.028625448 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Oct 13 11:36:01 np0005485008 python3[202024]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Oct 13 11:36:04 np0005485008 python3.9[202323]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:36:05 np0005485008 python3.9[202477]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:36:06 np0005485008 python3.9[202628]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760369765.734783-1056-161442030015113/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:36:07 np0005485008 python3.9[202704]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:36:07 np0005485008 systemd[1]: Reloading.
Oct 13 11:36:07 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:36:07 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:36:08 np0005485008 python3.9[202817]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:36:08 np0005485008 systemd[1]: Reloading.
Oct 13 11:36:08 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:36:08 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:36:08 np0005485008 systemd[1]: Starting podman_exporter container...
Oct 13 11:36:08 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:36:08 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65582732ef3c8e601cc00bc7e2557fe80b889a1cc858766f311e1736a065a498/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:08 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65582732ef3c8e601cc00bc7e2557fe80b889a1cc858766f311e1736a065a498/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:08 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.
Oct 13 11:36:08 np0005485008 podman[202857]: 2025-10-13 15:36:08.818764384 +0000 UTC m=+0.156810367 container init 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:36:08 np0005485008 podman_exporter[202873]: ts=2025-10-13T15:36:08.835Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 13 11:36:08 np0005485008 podman_exporter[202873]: ts=2025-10-13T15:36:08.835Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 13 11:36:08 np0005485008 podman_exporter[202873]: ts=2025-10-13T15:36:08.835Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 13 11:36:08 np0005485008 podman_exporter[202873]: ts=2025-10-13T15:36:08.835Z caller=handler.go:105 level=info collector=container
Oct 13 11:36:08 np0005485008 podman[202857]: 2025-10-13 15:36:08.850371777 +0000 UTC m=+0.188417790 container start 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:36:08 np0005485008 podman[202857]: podman_exporter
Oct 13 11:36:08 np0005485008 systemd[1]: Starting Podman API Service...
Oct 13 11:36:08 np0005485008 systemd[1]: Started Podman API Service.
Oct 13 11:36:08 np0005485008 systemd[1]: Started podman_exporter container.
Oct 13 11:36:08 np0005485008 podman[202884]: time="2025-10-13T15:36:08Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct 13 11:36:08 np0005485008 podman[202884]: time="2025-10-13T15:36:08Z" level=info msg="Setting parallel job count to 25"
Oct 13 11:36:08 np0005485008 podman[202884]: time="2025-10-13T15:36:08Z" level=info msg="Using sqlite as database backend"
Oct 13 11:36:08 np0005485008 podman[202884]: time="2025-10-13T15:36:08Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct 13 11:36:08 np0005485008 podman[202884]: time="2025-10-13T15:36:08Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct 13 11:36:08 np0005485008 podman[202884]: time="2025-10-13T15:36:08Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct 13 11:36:08 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:36:08 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 13 11:36:08 np0005485008 podman[202884]: time="2025-10-13T15:36:08Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:36:08 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:36:08 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 16556 "" "Go-http-client/1.1"
Oct 13 11:36:08 np0005485008 podman_exporter[202873]: ts=2025-10-13T15:36:08.930Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 13 11:36:08 np0005485008 podman_exporter[202873]: ts=2025-10-13T15:36:08.931Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 13 11:36:08 np0005485008 podman_exporter[202873]: ts=2025-10-13T15:36:08.931Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 13 11:36:08 np0005485008 podman[202883]: 2025-10-13 15:36:08.933700309 +0000 UTC m=+0.060402128 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:36:08 np0005485008 systemd[1]: 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee-6f64fcce99915c63.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 11:36:08 np0005485008 systemd[1]: 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee-6f64fcce99915c63.service: Failed with result 'exit-code'.
Oct 13 11:36:09 np0005485008 podman[203020]: 2025-10-13 15:36:09.777277109 +0000 UTC m=+0.074623163 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:36:09 np0005485008 podman[203031]: 2025-10-13 15:36:09.898177405 +0000 UTC m=+0.180749315 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:36:10 np0005485008 python3.9[203108]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:36:10 np0005485008 systemd[1]: Stopping podman_exporter container...
Oct 13 11:36:10 np0005485008 systemd[1]: libpod-9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.scope: Deactivated successfully.
Oct 13 11:36:10 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:36:08 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 5537 "" "Go-http-client/1.1"
Oct 13 11:36:10 np0005485008 podman[203115]: 2025-10-13 15:36:10.283729143 +0000 UTC m=+0.046913534 container died 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:36:10 np0005485008 systemd[1]: 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee-6f64fcce99915c63.timer: Deactivated successfully.
Oct 13 11:36:10 np0005485008 systemd[1]: Stopped /usr/bin/podman healthcheck run 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.
Oct 13 11:36:10 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee-userdata-shm.mount: Deactivated successfully.
Oct 13 11:36:10 np0005485008 systemd[1]: var-lib-containers-storage-overlay-65582732ef3c8e601cc00bc7e2557fe80b889a1cc858766f311e1736a065a498-merged.mount: Deactivated successfully.
Oct 13 11:36:10 np0005485008 podman[203115]: 2025-10-13 15:36:10.445637454 +0000 UTC m=+0.208821825 container cleanup 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:36:10 np0005485008 podman[203115]: podman_exporter
Oct 13 11:36:10 np0005485008 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 13 11:36:10 np0005485008 podman[203141]: podman_exporter
Oct 13 11:36:10 np0005485008 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Oct 13 11:36:10 np0005485008 systemd[1]: Stopped podman_exporter container.
Oct 13 11:36:10 np0005485008 systemd[1]: Starting podman_exporter container...
Oct 13 11:36:10 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:36:10 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65582732ef3c8e601cc00bc7e2557fe80b889a1cc858766f311e1736a065a498/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:10 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65582732ef3c8e601cc00bc7e2557fe80b889a1cc858766f311e1736a065a498/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:10 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.
Oct 13 11:36:10 np0005485008 podman[203154]: 2025-10-13 15:36:10.689587633 +0000 UTC m=+0.146939701 container init 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:36:10 np0005485008 podman_exporter[203170]: ts=2025-10-13T15:36:10.708Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Oct 13 11:36:10 np0005485008 podman_exporter[203170]: ts=2025-10-13T15:36:10.708Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Oct 13 11:36:10 np0005485008 podman_exporter[203170]: ts=2025-10-13T15:36:10.708Z caller=handler.go:94 level=info msg="enabled collectors"
Oct 13 11:36:10 np0005485008 podman_exporter[203170]: ts=2025-10-13T15:36:10.708Z caller=handler.go:105 level=info collector=container
Oct 13 11:36:10 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:36:10 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Oct 13 11:36:10 np0005485008 podman[202884]: time="2025-10-13T15:36:10Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:36:10 np0005485008 podman[203154]: 2025-10-13 15:36:10.722363774 +0000 UTC m=+0.179715782 container start 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:36:10 np0005485008 podman[203154]: podman_exporter
Oct 13 11:36:10 np0005485008 systemd[1]: Started podman_exporter container.
Oct 13 11:36:10 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:36:10 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 16558 "" "Go-http-client/1.1"
Oct 13 11:36:10 np0005485008 podman_exporter[203170]: ts=2025-10-13T15:36:10.737Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Oct 13 11:36:10 np0005485008 podman_exporter[203170]: ts=2025-10-13T15:36:10.739Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Oct 13 11:36:10 np0005485008 podman_exporter[203170]: ts=2025-10-13T15:36:10.739Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Oct 13 11:36:10 np0005485008 podman[203179]: 2025-10-13 15:36:10.800084376 +0000 UTC m=+0.065345306 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:36:10 np0005485008 podman[203204]: 2025-10-13 15:36:10.889619326 +0000 UTC m=+0.052628098 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 13 11:36:11 np0005485008 python3.9[203374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:36:12 np0005485008 python3.9[203497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760369771.3921645-1120-276340362469134/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 13 11:36:13 np0005485008 podman[203621]: 2025-10-13 15:36:13.177308648 +0000 UTC m=+0.052602567 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:36:13 np0005485008 python3.9[203667]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Oct 13 11:36:14 np0005485008 python3.9[203819]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 13 11:36:15 np0005485008 python3[203971]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Oct 13 11:36:17 np0005485008 podman[203984]: 2025-10-13 15:36:17.450552596 +0000 UTC m=+2.280248843 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 13 11:36:17 np0005485008 podman[204080]: 2025-10-13 15:36:17.634199994 +0000 UTC m=+0.072592418 container create 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 11:36:17 np0005485008 podman[204080]: 2025-10-13 15:36:17.594650806 +0000 UTC m=+0.033043230 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 13 11:36:17 np0005485008 python3[203971]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Oct 13 11:36:19 np0005485008 python3.9[204271]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:36:20 np0005485008 python3.9[204425]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:36:21 np0005485008 python3.9[204576]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760369780.815609-1226-108287751124475/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:36:22 np0005485008 python3.9[204652]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 13 11:36:22 np0005485008 systemd[1]: Reloading.
Oct 13 11:36:22 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:36:22 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:36:23 np0005485008 python3.9[204762]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 13 11:36:23 np0005485008 systemd[1]: Reloading.
Oct 13 11:36:23 np0005485008 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 13 11:36:23 np0005485008 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 13 11:36:23 np0005485008 systemd[1]: Starting openstack_network_exporter container...
Oct 13 11:36:23 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:36:23 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aed48dbe97ccc03c9e705f8c8e784f166b13b09bea2fd9a7e21b2d75ceb0e1c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:23 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aed48dbe97ccc03c9e705f8c8e784f166b13b09bea2fd9a7e21b2d75ceb0e1c/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:23 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aed48dbe97ccc03c9e705f8c8e784f166b13b09bea2fd9a7e21b2d75ceb0e1c/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:23 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.
Oct 13 11:36:23 np0005485008 podman[204803]: 2025-10-13 15:36:23.615041359 +0000 UTC m=+0.128612004 container init 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *bridge.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *coverage.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *datapath.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *iface.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *memory.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *ovnnorthd.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *ovn.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *ovsdbserver.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *pmd_perf.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *pmd_rxq.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: INFO    15:36:23 main.go:48: registering *vswitch.Collector
Oct 13 11:36:23 np0005485008 openstack_network_exporter[204819]: NOTICE  15:36:23 main.go:76: listening on https://:9105/metrics
Oct 13 11:36:23 np0005485008 podman[204803]: 2025-10-13 15:36:23.648260265 +0000 UTC m=+0.161830900 container start 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Oct 13 11:36:23 np0005485008 podman[204803]: openstack_network_exporter
Oct 13 11:36:23 np0005485008 systemd[1]: Started openstack_network_exporter container.
Oct 13 11:36:23 np0005485008 podman[204829]: 2025-10-13 15:36:23.768464608 +0000 UTC m=+0.107546899 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 11:36:24 np0005485008 python3.9[205003]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 13 11:36:24 np0005485008 systemd[1]: Stopping openstack_network_exporter container...
Oct 13 11:36:24 np0005485008 systemd[1]: libpod-8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.scope: Deactivated successfully.
Oct 13 11:36:24 np0005485008 podman[205007]: 2025-10-13 15:36:24.688857751 +0000 UTC m=+0.053069002 container died 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 11:36:24 np0005485008 systemd[1]: 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740-36a14ea38b224c97.timer: Deactivated successfully.
Oct 13 11:36:24 np0005485008 systemd[1]: Stopped /usr/bin/podman healthcheck run 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.
Oct 13 11:36:24 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740-userdata-shm.mount: Deactivated successfully.
Oct 13 11:36:24 np0005485008 systemd[1]: var-lib-containers-storage-overlay-5aed48dbe97ccc03c9e705f8c8e784f166b13b09bea2fd9a7e21b2d75ceb0e1c-merged.mount: Deactivated successfully.
Oct 13 11:36:25 np0005485008 podman[205007]: 2025-10-13 15:36:25.280242748 +0000 UTC m=+0.644453989 container cleanup 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git)
Oct 13 11:36:25 np0005485008 podman[205007]: openstack_network_exporter
Oct 13 11:36:25 np0005485008 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Oct 13 11:36:25 np0005485008 podman[205034]: openstack_network_exporter
Oct 13 11:36:25 np0005485008 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Oct 13 11:36:25 np0005485008 systemd[1]: Stopped openstack_network_exporter container.
Oct 13 11:36:25 np0005485008 systemd[1]: Starting openstack_network_exporter container...
Oct 13 11:36:25 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:36:25 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aed48dbe97ccc03c9e705f8c8e784f166b13b09bea2fd9a7e21b2d75ceb0e1c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:25 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aed48dbe97ccc03c9e705f8c8e784f166b13b09bea2fd9a7e21b2d75ceb0e1c/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:25 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aed48dbe97ccc03c9e705f8c8e784f166b13b09bea2fd9a7e21b2d75ceb0e1c/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Oct 13 11:36:25 np0005485008 systemd[1]: Started /usr/bin/podman healthcheck run 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.
Oct 13 11:36:25 np0005485008 podman[205047]: 2025-10-13 15:36:25.511043806 +0000 UTC m=+0.131252498 container init 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git)
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *bridge.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *coverage.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *datapath.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *iface.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *memory.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *ovnnorthd.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *ovn.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *ovsdbserver.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *pmd_perf.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *pmd_rxq.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: INFO    15:36:25 main.go:48: registering *vswitch.Collector
Oct 13 11:36:25 np0005485008 openstack_network_exporter[205063]: NOTICE  15:36:25 main.go:76: listening on https://:9105/metrics
Oct 13 11:36:25 np0005485008 podman[205047]: 2025-10-13 15:36:25.539846809 +0000 UTC m=+0.160055501 container start 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, version=9.6, 
build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Oct 13 11:36:25 np0005485008 podman[205047]: openstack_network_exporter
Oct 13 11:36:25 np0005485008 systemd[1]: Started openstack_network_exporter container.
Oct 13 11:36:25 np0005485008 podman[205073]: 2025-10-13 15:36:25.618203541 +0000 UTC m=+0.069147277 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Oct 13 11:36:26 np0005485008 python3.9[205245]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 13 11:36:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:36:33.938 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:36:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:36:33.938 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:36:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:36:33.938 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:36:40 np0005485008 podman[205270]: 2025-10-13 15:36:40.772284017 +0000 UTC m=+0.068676024 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:36:40 np0005485008 podman[205271]: 2025-10-13 15:36:40.803634492 +0000 UTC m=+0.091404622 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:36:41 np0005485008 podman[205314]: 2025-10-13 15:36:41.758967124 +0000 UTC m=+0.063563299 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 13 11:36:41 np0005485008 podman[205315]: 2025-10-13 15:36:41.781220218 +0000 UTC m=+0.071363330 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:36:43 np0005485008 podman[205357]: 2025-10-13 15:36:43.783455508 +0000 UTC m=+0.076725820 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid)
Oct 13 11:36:51 np0005485008 python3.9[205504]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Oct 13 11:36:51 np0005485008 python3.9[205669]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:36:52 np0005485008 systemd[1]: Started libpod-conmon-a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035.scope.
Oct 13 11:36:52 np0005485008 podman[205670]: 2025-10-13 15:36:52.037809478 +0000 UTC m=+0.098918331 container exec a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 11:36:52 np0005485008 podman[205670]: 2025-10-13 15:36:52.073296082 +0000 UTC m=+0.134404845 container exec_died a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 11:36:52 np0005485008 systemd[1]: libpod-conmon-a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035.scope: Deactivated successfully.
Oct 13 11:36:52 np0005485008 python3.9[205853]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:36:52 np0005485008 systemd[1]: Started libpod-conmon-a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035.scope.
Oct 13 11:36:52 np0005485008 podman[205854]: 2025-10-13 15:36:52.955687023 +0000 UTC m=+0.084831148 container exec a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Oct 13 11:36:52 np0005485008 podman[205854]: 2025-10-13 15:36:52.991173557 +0000 UTC m=+0.120317682 container exec_died a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 11:36:53 np0005485008 systemd[1]: libpod-conmon-a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035.scope: Deactivated successfully.
Oct 13 11:36:53 np0005485008 python3.9[206037]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:36:54 np0005485008 python3.9[206189]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Oct 13 11:36:54 np0005485008 nova_compute[192512]: 2025-10-13 15:36:54.820 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:54 np0005485008 nova_compute[192512]: 2025-10-13 15:36:54.849 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:54 np0005485008 nova_compute[192512]: 2025-10-13 15:36:54.849 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:54 np0005485008 nova_compute[192512]: 2025-10-13 15:36:54.850 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:54 np0005485008 nova_compute[192512]: 2025-10-13 15:36:54.850 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:55 np0005485008 python3.9[206354]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:36:55 np0005485008 nova_compute[192512]: 2025-10-13 15:36:55.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:55 np0005485008 systemd[1]: Started libpod-conmon-94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225.scope.
Oct 13 11:36:55 np0005485008 nova_compute[192512]: 2025-10-13 15:36:55.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:36:55 np0005485008 nova_compute[192512]: 2025-10-13 15:36:55.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:36:55 np0005485008 podman[206355]: 2025-10-13 15:36:55.439887738 +0000 UTC m=+0.078818952 container exec 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 13 11:36:55 np0005485008 nova_compute[192512]: 2025-10-13 15:36:55.448 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:36:55 np0005485008 nova_compute[192512]: 2025-10-13 15:36:55.448 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:55 np0005485008 nova_compute[192512]: 2025-10-13 15:36:55.449 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:36:55 np0005485008 podman[206355]: 2025-10-13 15:36:55.470599385 +0000 UTC m=+0.109530599 container exec_died 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 13 11:36:55 np0005485008 systemd[1]: libpod-conmon-94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225.scope: Deactivated successfully.
Oct 13 11:36:55 np0005485008 podman[206431]: 2025-10-13 15:36:55.813795158 +0000 UTC m=+0.100418278 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 11:36:56 np0005485008 python3.9[206561]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:36:56 np0005485008 systemd[1]: Started libpod-conmon-94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225.scope.
Oct 13 11:36:56 np0005485008 podman[206562]: 2025-10-13 15:36:56.302596681 +0000 UTC m=+0.079717209 container exec 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 11:36:56 np0005485008 podman[206562]: 2025-10-13 15:36:56.332189703 +0000 UTC m=+0.109310221 container exec_died 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 13 11:36:56 np0005485008 systemd[1]: libpod-conmon-94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225.scope: Deactivated successfully.
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.471 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.471 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.471 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.471 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.641 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.642 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6054MB free_disk=73.50442504882812GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.642 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.642 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.815 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.815 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.838 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.857 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.859 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:36:56 np0005485008 nova_compute[192512]: 2025-10-13 15:36:56.859 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:36:57 np0005485008 python3.9[206746]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:36:57 np0005485008 python3.9[206898]: ansible-containers.podman.podman_container_info Invoked with name=['iscsid'] executable=podman
Oct 13 11:36:58 np0005485008 python3.9[207063]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:36:58 np0005485008 systemd[1]: Started libpod-conmon-141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727.scope.
Oct 13 11:36:58 np0005485008 podman[207064]: 2025-10-13 15:36:58.741905471 +0000 UTC m=+0.080581916 container exec 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 11:36:58 np0005485008 podman[207064]: 2025-10-13 15:36:58.78010291 +0000 UTC m=+0.118779355 container exec_died 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 11:36:58 np0005485008 systemd[1]: libpod-conmon-141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727.scope: Deactivated successfully.
Oct 13 11:36:59 np0005485008 python3.9[207246]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=iscsid detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:36:59 np0005485008 systemd[1]: Started libpod-conmon-141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727.scope.
Oct 13 11:36:59 np0005485008 podman[207247]: 2025-10-13 15:36:59.620791303 +0000 UTC m=+0.076253042 container exec 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid)
Oct 13 11:36:59 np0005485008 podman[207247]: 2025-10-13 15:36:59.656791943 +0000 UTC m=+0.112253632 container exec_died 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 11:36:59 np0005485008 systemd[1]: libpod-conmon-141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727.scope: Deactivated successfully.
Oct 13 11:37:00 np0005485008 python3.9[207431]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/iscsid recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:01 np0005485008 python3.9[207583]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Oct 13 11:37:02 np0005485008 python3.9[207748]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:37:02 np0005485008 systemd[1]: Started libpod-conmon-0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.scope.
Oct 13 11:37:02 np0005485008 podman[207749]: 2025-10-13 15:37:02.145839338 +0000 UTC m=+0.102743129 container exec 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 11:37:02 np0005485008 podman[207749]: 2025-10-13 15:37:02.175866214 +0000 UTC m=+0.132770025 container exec_died 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 11:37:02 np0005485008 systemd[1]: libpod-conmon-0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.scope: Deactivated successfully.
Oct 13 11:37:02 np0005485008 python3.9[207933]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:37:03 np0005485008 systemd[1]: Started libpod-conmon-0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.scope.
Oct 13 11:37:03 np0005485008 podman[207934]: 2025-10-13 15:37:03.053604741 +0000 UTC m=+0.076317345 container exec 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:37:03 np0005485008 podman[207934]: 2025-10-13 15:37:03.083942896 +0000 UTC m=+0.106655500 container exec_died 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251009, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Oct 13 11:37:03 np0005485008 systemd[1]: libpod-conmon-0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e.scope: Deactivated successfully.
Oct 13 11:37:03 np0005485008 python3.9[208116]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:04 np0005485008 python3.9[208268]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Oct 13 11:37:05 np0005485008 python3.9[208434]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:37:05 np0005485008 systemd[1]: Started libpod-conmon-9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.scope.
Oct 13 11:37:05 np0005485008 podman[208435]: 2025-10-13 15:37:05.449972847 +0000 UTC m=+0.089745298 container exec 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:37:05 np0005485008 podman[208435]: 2025-10-13 15:37:05.485938036 +0000 UTC m=+0.125710467 container exec_died 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:37:05 np0005485008 systemd[1]: libpod-conmon-9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.scope: Deactivated successfully.
Oct 13 11:37:06 np0005485008 python3.9[208619]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:37:06 np0005485008 systemd[1]: Started libpod-conmon-9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.scope.
Oct 13 11:37:06 np0005485008 podman[208620]: 2025-10-13 15:37:06.32951534 +0000 UTC m=+0.095784175 container exec 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:37:06 np0005485008 podman[208620]: 2025-10-13 15:37:06.363079305 +0000 UTC m=+0.129348140 container exec_died 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:37:06 np0005485008 systemd[1]: libpod-conmon-9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee.scope: Deactivated successfully.
Oct 13 11:37:07 np0005485008 python3.9[208803]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:07 np0005485008 python3.9[208955]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Oct 13 11:37:08 np0005485008 python3.9[209120]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:37:08 np0005485008 systemd[1]: Started libpod-conmon-8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.scope.
Oct 13 11:37:08 np0005485008 podman[209121]: 2025-10-13 15:37:08.904732741 +0000 UTC m=+0.082330670 container exec 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Oct 13 11:37:08 np0005485008 podman[209121]: 2025-10-13 15:37:08.937978716 +0000 UTC m=+0.115576625 container exec_died 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 11:37:08 np0005485008 systemd[1]: libpod-conmon-8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.scope: Deactivated successfully.
Oct 13 11:37:09 np0005485008 python3.9[209305]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Oct 13 11:37:09 np0005485008 systemd[1]: Started libpod-conmon-8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.scope.
Oct 13 11:37:09 np0005485008 podman[209306]: 2025-10-13 15:37:09.785509021 +0000 UTC m=+0.068703229 container exec 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, vcs-type=git, version=9.6, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Oct 13 11:37:09 np0005485008 podman[209306]: 2025-10-13 15:37:09.816788596 +0000 UTC m=+0.099982794 container exec_died 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Oct 13 11:37:09 np0005485008 systemd[1]: libpod-conmon-8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740.scope: Deactivated successfully.
Oct 13 11:37:10 np0005485008 python3.9[209489]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:11 np0005485008 podman[209613]: 2025-10-13 15:37:11.254322145 +0000 UTC m=+0.091870514 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 13 11:37:11 np0005485008 podman[209614]: 2025-10-13 15:37:11.308591948 +0000 UTC m=+0.146604331 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:37:11 np0005485008 python3.9[209665]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:11 np0005485008 podman[209810]: 2025-10-13 15:37:11.982030465 +0000 UTC m=+0.083822295 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:37:11 np0005485008 podman[209809]: 2025-10-13 15:37:11.985308407 +0000 UTC m=+0.087454308 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 11:37:12 np0005485008 python3.9[209879]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:12 np0005485008 python3.9[210002]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760369831.6143441-1722-45028576849916/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:13 np0005485008 python3.9[210154]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:14 np0005485008 podman[210278]: 2025-10-13 15:37:14.282609718 +0000 UTC m=+0.061988832 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 13 11:37:14 np0005485008 python3.9[210326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:14 np0005485008 python3.9[210405]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:15 np0005485008 python3.9[210557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:16 np0005485008 python3.9[210635]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qvr9x_p0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:17 np0005485008 python3.9[210787]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:17 np0005485008 python3.9[210865]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:18 np0005485008 python3.9[211017]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:37:19 np0005485008 python3[211170]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 13 11:37:19 np0005485008 python3.9[211322]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:20 np0005485008 python3.9[211400]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:21 np0005485008 python3.9[211552]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:21 np0005485008 python3.9[211630]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:22 np0005485008 python3.9[211782]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:22 np0005485008 python3.9[211860]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:23 np0005485008 python3.9[212012]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:24 np0005485008 python3.9[212090]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:25 np0005485008 python3.9[212242]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 13 11:37:25 np0005485008 python3.9[212367]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760369844.6521456-1972-276070590573562/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:26 np0005485008 podman[212491]: 2025-10-13 15:37:26.486027142 +0000 UTC m=+0.087457548 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Oct 13 11:37:26 np0005485008 python3.9[212536]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:27 np0005485008 python3.9[212692]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:37:28 np0005485008 python3.9[212847]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:29 np0005485008 python3.9[212999]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:37:30 np0005485008 python3.9[213152]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 13 11:37:30 np0005485008 python3.9[213306]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 13 11:37:31 np0005485008 python3.9[213461]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 13 11:37:32 np0005485008 systemd[1]: session-29.scope: Deactivated successfully.
Oct 13 11:37:32 np0005485008 systemd[1]: session-29.scope: Consumed 1min 29.126s CPU time.
Oct 13 11:37:32 np0005485008 systemd-logind[784]: Session 29 logged out. Waiting for processes to exit.
Oct 13 11:37:32 np0005485008 systemd-logind[784]: Removed session 29.
Oct 13 11:37:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:37:33.939 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:37:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:37:33.940 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:37:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:37:33.940 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:37:35 np0005485008 podman[202884]: time="2025-10-13T15:37:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:37:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:37:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:37:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:37:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2964 "" "Go-http-client/1.1"
Oct 13 11:37:41 np0005485008 podman[213489]: 2025-10-13 15:37:41.750632497 +0000 UTC m=+0.052758639 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 11:37:41 np0005485008 podman[213490]: 2025-10-13 15:37:41.796729308 +0000 UTC m=+0.094017170 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 13 11:37:42 np0005485008 podman[213533]: 2025-10-13 15:37:42.749613642 +0000 UTC m=+0.056485853 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 11:37:42 np0005485008 podman[213534]: 2025-10-13 15:37:42.786610533 +0000 UTC m=+0.077769560 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:37:44 np0005485008 podman[213576]: 2025-10-13 15:37:44.798571866 +0000 UTC m=+0.098162459 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 11:37:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:37:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:37:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:37:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:37:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:37:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:37:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:37:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:37:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:37:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:37:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:37:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:37:54 np0005485008 nova_compute[192512]: 2025-10-13 15:37:54.859 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:54 np0005485008 nova_compute[192512]: 2025-10-13 15:37:54.860 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:55 np0005485008 nova_compute[192512]: 2025-10-13 15:37:55.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:56 np0005485008 nova_compute[192512]: 2025-10-13 15:37:56.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:56 np0005485008 podman[213601]: 2025-10-13 15:37:56.77907215 +0000 UTC m=+0.074734456 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter)
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.454 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.454 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.455 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:57 np0005485008 nova_compute[192512]: 2025-10-13 15:37:57.455 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.451 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.452 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.452 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.452 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.620 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.622 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6101MB free_disk=73.5043716430664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.622 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.622 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.675 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.675 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.704 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.715 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.717 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:37:58 np0005485008 nova_compute[192512]: 2025-10-13 15:37:58.717 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:38:05 np0005485008 podman[202884]: time="2025-10-13T15:38:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:38:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:38:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:38:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:38:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2975 "" "Go-http-client/1.1"
Oct 13 11:38:12 np0005485008 podman[213624]: 2025-10-13 15:38:12.767345258 +0000 UTC m=+0.070887216 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Oct 13 11:38:12 np0005485008 podman[213625]: 2025-10-13 15:38:12.830588606 +0000 UTC m=+0.117457745 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 13 11:38:12 np0005485008 podman[213657]: 2025-10-13 15:38:12.858491975 +0000 UTC m=+0.066068827 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 11:38:12 np0005485008 podman[213685]: 2025-10-13 15:38:12.923914911 +0000 UTC m=+0.066852662 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:38:15 np0005485008 podman[213714]: 2025-10-13 15:38:15.764420896 +0000 UTC m=+0.062442094 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 11:38:27 np0005485008 podman[213733]: 2025-10-13 15:38:27.763406063 +0000 UTC m=+0.062440854 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal)
Oct 13 11:38:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:38:30.963 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:38:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:38:30.964 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:38:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:38:30.965 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:38:32 np0005485008 systemd[1]: packagekit.service: Deactivated successfully.
Oct 13 11:38:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:38:33.940 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:38:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:38:33.942 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:38:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:38:33.943 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:38:43 np0005485008 podman[213756]: 2025-10-13 15:38:43.766872884 +0000 UTC m=+0.061901407 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 11:38:43 np0005485008 podman[213755]: 2025-10-13 15:38:43.772819998 +0000 UTC m=+0.076117158 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Oct 13 11:38:43 np0005485008 podman[213762]: 2025-10-13 15:38:43.803206514 +0000 UTC m=+0.091709045 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:38:43 np0005485008 podman[213763]: 2025-10-13 15:38:43.813192215 +0000 UTC m=+0.093623825 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:38:46 np0005485008 podman[213841]: 2025-10-13 15:38:46.773365414 +0000 UTC m=+0.073964633 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 11:38:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:38:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:38:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:38:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:38:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:38:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:38:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:38:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:38:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:38:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:38:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:38:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:38:54 np0005485008 nova_compute[192512]: 2025-10-13 15:38:54.718 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:55 np0005485008 nova_compute[192512]: 2025-10-13 15:38:55.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:55 np0005485008 nova_compute[192512]: 2025-10-13 15:38:55.444 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:57 np0005485008 nova_compute[192512]: 2025-10-13 15:38:57.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:57 np0005485008 nova_compute[192512]: 2025-10-13 15:38:57.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:57 np0005485008 nova_compute[192512]: 2025-10-13 15:38:57.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:57 np0005485008 nova_compute[192512]: 2025-10-13 15:38:57.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:57 np0005485008 nova_compute[192512]: 2025-10-13 15:38:57.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:38:58 np0005485008 nova_compute[192512]: 2025-10-13 15:38:58.425 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:58 np0005485008 nova_compute[192512]: 2025-10-13 15:38:58.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:58 np0005485008 nova_compute[192512]: 2025-10-13 15:38:58.426 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:38:58 np0005485008 nova_compute[192512]: 2025-10-13 15:38:58.426 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:38:58 np0005485008 nova_compute[192512]: 2025-10-13 15:38:58.440 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:38:58 np0005485008 podman[213863]: 2025-10-13 15:38:58.783392646 +0000 UTC m=+0.076619195 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.487 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.487 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.487 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.487 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.667 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.669 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6156MB free_disk=73.5043716430664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.670 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.670 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.729 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.729 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.746 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.765 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.767 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:38:59 np0005485008 nova_compute[192512]: 2025-10-13 15:38:59.768 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:39:05 np0005485008 podman[202884]: time="2025-10-13T15:39:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:39:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:39:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:39:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:39:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2980 "" "Go-http-client/1.1"
Oct 13 11:39:14 np0005485008 podman[213885]: 2025-10-13 15:39:14.771629721 +0000 UTC m=+0.067633852 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:39:14 np0005485008 podman[213886]: 2025-10-13 15:39:14.777566869 +0000 UTC m=+0.071385084 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 11:39:14 np0005485008 podman[213884]: 2025-10-13 15:39:14.777760645 +0000 UTC m=+0.079686104 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 11:39:14 np0005485008 podman[213887]: 2025-10-13 15:39:14.823317862 +0000 UTC m=+0.108280922 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:39:17 np0005485008 podman[213969]: 2025-10-13 15:39:17.75961339 +0000 UTC m=+0.065396193 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 11:39:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:39:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:39:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:39:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:39:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:39:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:39:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:39:29 np0005485008 podman[213990]: 2025-10-13 15:39:29.760277496 +0000 UTC m=+0.060339193 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 11:39:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:39:33.941 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:39:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:39:33.943 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:39:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:39:33.943 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:39:35 np0005485008 podman[202884]: time="2025-10-13T15:39:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:39:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:39:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:39:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:39:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2979 "" "Go-http-client/1.1"
Oct 13 11:39:45 np0005485008 podman[214011]: 2025-10-13 15:39:45.757144546 +0000 UTC m=+0.051068425 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 11:39:45 np0005485008 podman[214012]: 2025-10-13 15:39:45.777374463 +0000 UTC m=+0.062218889 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:39:45 np0005485008 podman[214010]: 2025-10-13 15:39:45.777853997 +0000 UTC m=+0.069024763 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:39:45 np0005485008 podman[214018]: 2025-10-13 15:39:45.804119345 +0000 UTC m=+0.081861358 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:39:48 np0005485008 podman[214093]: 2025-10-13 15:39:48.780711794 +0000 UTC m=+0.075321952 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 11:39:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:39:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:39:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:39:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:39:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:39:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:39:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:39:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:39:54 np0005485008 nova_compute[192512]: 2025-10-13 15:39:54.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:54 np0005485008 nova_compute[192512]: 2025-10-13 15:39:54.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:54 np0005485008 nova_compute[192512]: 2025-10-13 15:39:54.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 11:39:54 np0005485008 nova_compute[192512]: 2025-10-13 15:39:54.456 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 11:39:54 np0005485008 nova_compute[192512]: 2025-10-13 15:39:54.458 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:54 np0005485008 nova_compute[192512]: 2025-10-13 15:39:54.459 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 11:39:54 np0005485008 nova_compute[192512]: 2025-10-13 15:39:54.481 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:57 np0005485008 nova_compute[192512]: 2025-10-13 15:39:57.506 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:57 np0005485008 nova_compute[192512]: 2025-10-13 15:39:57.507 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:57 np0005485008 nova_compute[192512]: 2025-10-13 15:39:57.508 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:39:58 np0005485008 nova_compute[192512]: 2025-10-13 15:39:58.425 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:58 np0005485008 nova_compute[192512]: 2025-10-13 15:39:58.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:58 np0005485008 nova_compute[192512]: 2025-10-13 15:39:58.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.471 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.472 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.472 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.472 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.646 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.647 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6178MB free_disk=73.5043716430664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.648 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.648 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.769 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.770 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.839 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.905 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.906 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.925 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 11:39:59 np0005485008 nova_compute[192512]: 2025-10-13 15:39:59.979 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 11:40:00 np0005485008 nova_compute[192512]: 2025-10-13 15:40:00.022 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:40:00 np0005485008 nova_compute[192512]: 2025-10-13 15:40:00.035 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:40:00 np0005485008 nova_compute[192512]: 2025-10-13 15:40:00.037 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:40:00 np0005485008 nova_compute[192512]: 2025-10-13 15:40:00.037 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:40:00 np0005485008 podman[214114]: 2025-10-13 15:40:00.785270935 +0000 UTC m=+0.082378605 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container)
Oct 13 11:40:01 np0005485008 nova_compute[192512]: 2025-10-13 15:40:01.038 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:40:01 np0005485008 nova_compute[192512]: 2025-10-13 15:40:01.038 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:40:01 np0005485008 nova_compute[192512]: 2025-10-13 15:40:01.038 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:40:01 np0005485008 nova_compute[192512]: 2025-10-13 15:40:01.084 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:40:05 np0005485008 podman[202884]: time="2025-10-13T15:40:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:40:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:40:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:40:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:40:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2985 "" "Go-http-client/1.1"
Oct 13 11:40:16 np0005485008 podman[214138]: 2025-10-13 15:40:16.77740749 +0000 UTC m=+0.066061664 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:40:16 np0005485008 podman[214137]: 2025-10-13 15:40:16.77772685 +0000 UTC m=+0.067346384 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible)
Oct 13 11:40:16 np0005485008 podman[214136]: 2025-10-13 15:40:16.797281722 +0000 UTC m=+0.092876390 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:40:16 np0005485008 podman[214139]: 2025-10-13 15:40:16.801818293 +0000 UTC m=+0.085504574 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 11:40:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:40:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:40:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:40:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:40:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:40:19 np0005485008 podman[214222]: 2025-10-13 15:40:19.75989565 +0000 UTC m=+0.063063862 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:40:31 np0005485008 podman[214242]: 2025-10-13 15:40:31.776875289 +0000 UTC m=+0.072191685 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64)
Oct 13 11:40:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:40:33.943 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:40:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:40:33.943 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:40:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:40:33.944 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:40:35 np0005485008 podman[202884]: time="2025-10-13T15:40:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:40:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:40:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:40:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:40:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2977 "" "Go-http-client/1.1"
Oct 13 11:40:47 np0005485008 podman[214263]: 2025-10-13 15:40:47.772159763 +0000 UTC m=+0.073272507 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009)
Oct 13 11:40:47 np0005485008 podman[214265]: 2025-10-13 15:40:47.773586717 +0000 UTC m=+0.067623883 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:40:47 np0005485008 podman[214264]: 2025-10-13 15:40:47.773728721 +0000 UTC m=+0.067545870 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 13 11:40:47 np0005485008 podman[214266]: 2025-10-13 15:40:47.810038559 +0000 UTC m=+0.099736432 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, tcib_managed=true)
Oct 13 11:40:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:40:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:40:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:40:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:40:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:40:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:40:50 np0005485008 podman[214349]: 2025-10-13 15:40:50.775751091 +0000 UTC m=+0.068941783 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 11:40:54 np0005485008 nova_compute[192512]: 2025-10-13 15:40:54.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:40:58 np0005485008 nova_compute[192512]: 2025-10-13 15:40:58.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:40:58 np0005485008 nova_compute[192512]: 2025-10-13 15:40:58.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:40:59 np0005485008 nova_compute[192512]: 2025-10-13 15:40:59.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:40:59 np0005485008 nova_compute[192512]: 2025-10-13 15:40:59.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:41:00 np0005485008 nova_compute[192512]: 2025-10-13 15:41:00.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:41:00 np0005485008 nova_compute[192512]: 2025-10-13 15:41:00.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:41:00 np0005485008 nova_compute[192512]: 2025-10-13 15:41:00.462 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.459 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.459 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.460 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.460 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.619 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.620 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6190MB free_disk=73.5044059753418GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.714 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.715 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.740 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.777 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.779 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 13 11:41:01 np0005485008 nova_compute[192512]: 2025-10-13 15:41:01.779 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:41:02 np0005485008 podman[214370]: 2025-10-13 15:41:02.768705509 +0000 UTC m=+0.064876498 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, release=1755695350, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm)
Oct 13 11:41:02 np0005485008 nova_compute[192512]: 2025-10-13 15:41:02.779 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 11:41:02 np0005485008 nova_compute[192512]: 2025-10-13 15:41:02.780 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 13 11:41:02 np0005485008 nova_compute[192512]: 2025-10-13 15:41:02.780 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 13 11:41:02 np0005485008 nova_compute[192512]: 2025-10-13 15:41:02.798 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 13 11:41:05 np0005485008 podman[202884]: time="2025-10-13T15:41:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:41:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:41:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:41:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:41:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2985 "" "Go-http-client/1.1"
Oct 13 11:41:18 np0005485008 podman[214393]: 2025-10-13 15:41:18.769198023 +0000 UTC m=+0.057846592 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 11:41:18 np0005485008 podman[214392]: 2025-10-13 15:41:18.795678528 +0000 UTC m=+0.088926489 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd)
Oct 13 11:41:18 np0005485008 podman[214394]: 2025-10-13 15:41:18.801780625 +0000 UTC m=+0.084900914 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:41:18 np0005485008 podman[214395]: 2025-10-13 15:41:18.811443773 +0000 UTC m=+0.091713694 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:41:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:41:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:41:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:41:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:41:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:41:21 np0005485008 podman[214476]: 2025-10-13 15:41:21.760429422 +0000 UTC m=+0.063489296 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 11:41:33 np0005485008 podman[214496]: 2025-10-13 15:41:33.790167172 +0000 UTC m=+0.088232938 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7)
Oct 13 11:41:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:41:33.944 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:41:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:41:33.944 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:41:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:41:33.944 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:41:35 np0005485008 podman[202884]: time="2025-10-13T15:41:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:41:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:41:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:41:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:41:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2979 "" "Go-http-client/1.1"
Oct 13 11:41:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:41:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:41:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:41:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:41:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:41:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:41:49 np0005485008 podman[214517]: 2025-10-13 15:41:49.762307432 +0000 UTC m=+0.061277847 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 11:41:49 np0005485008 podman[214519]: 2025-10-13 15:41:49.773426514 +0000 UTC m=+0.062967499 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:41:49 np0005485008 podman[214518]: 2025-10-13 15:41:49.78139521 +0000 UTC m=+0.069884922 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 13 11:41:49 np0005485008 podman[214520]: 2025-10-13 15:41:49.803104748 +0000 UTC m=+0.089382562 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:41:52 np0005485008 podman[214603]: 2025-10-13 15:41:52.776569651 +0000 UTC m=+0.075711052 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009)
Oct 13 11:41:56 np0005485008 nova_compute[192512]: 2025-10-13 15:41:56.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 11:41:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:41:59.312 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 11:41:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:41:59.313 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 13 11:41:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:41:59.313 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:41:59 np0005485008 nova_compute[192512]: 2025-10-13 15:41:59.435 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 11:41:59 np0005485008 nova_compute[192512]: 2025-10-13 15:41:59.435 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 13 11:42:00 np0005485008 nova_compute[192512]: 2025-10-13 15:42:00.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 11:42:01 np0005485008 nova_compute[192512]: 2025-10-13 15:42:01.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 11:42:01 np0005485008 nova_compute[192512]: 2025-10-13 15:42:01.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 13 11:42:01 np0005485008 nova_compute[192512]: 2025-10-13 15:42:01.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:42:02 np0005485008 nova_compute[192512]: 2025-10-13 15:42:02.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:42:02 np0005485008 nova_compute[192512]: 2025-10-13 15:42:02.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:42:02 np0005485008 nova_compute[192512]: 2025-10-13 15:42:02.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:42:02 np0005485008 nova_compute[192512]: 2025-10-13 15:42:02.461 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:42:02 np0005485008 nova_compute[192512]: 2025-10-13 15:42:02.461 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.468 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.624 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.625 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=6192MB free_disk=73.5044059753418GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.626 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.626 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.684 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.685 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.707 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.722 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.724 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:42:03 np0005485008 nova_compute[192512]: 2025-10-13 15:42:03.724 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:04 np0005485008 podman[214623]: 2025-10-13 15:42:04.75371572 +0000 UTC m=+0.058744519 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 11:42:05 np0005485008 podman[202884]: time="2025-10-13T15:42:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:42:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:42:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:42:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:42:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2982 "" "Go-http-client/1.1"
Oct 13 11:42:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:42:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:42:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:42:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:42:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:42:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:42:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:42:20 np0005485008 podman[214646]: 2025-10-13 15:42:20.796428641 +0000 UTC m=+0.081243396 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 11:42:20 np0005485008 podman[214647]: 2025-10-13 15:42:20.809492775 +0000 UTC m=+0.095947670 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:42:20 np0005485008 podman[214645]: 2025-10-13 15:42:20.810123265 +0000 UTC m=+0.101708710 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 11:42:20 np0005485008 podman[214648]: 2025-10-13 15:42:20.830504676 +0000 UTC m=+0.107120867 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 11:42:23 np0005485008 podman[214726]: 2025-10-13 15:42:23.767045147 +0000 UTC m=+0.067887483 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:42:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:33.946 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:33.946 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:33.946 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:35 np0005485008 podman[202884]: time="2025-10-13T15:42:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:42:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:42:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:42:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:42:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2978 "" "Go-http-client/1.1"
Oct 13 11:42:35 np0005485008 podman[214749]: 2025-10-13 15:42:35.788570458 +0000 UTC m=+0.081841105 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.200 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.201 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.214 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.284 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.284 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.290 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.291 2 INFO nova.compute.claims [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.400 2 DEBUG nova.compute.provider_tree [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.417 2 DEBUG nova.scheduler.client.report [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
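The inventory dict reported above determines schedulable capacity: for each resource class, Placement can hand out `(total - reserved) * allocation_ratio` units. A small worked example using the exact figures from this provider (helper name is hypothetical; the formula mirrors the Placement data model in simplified form):

```python
def placement_capacity(inv):
    """Schedulable capacity per resource class, as Placement derives it:
    capacity = (total - reserved) * allocation_ratio (simplified)."""
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

# Figures taken verbatim from the inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
}
caps = placement_capacity(inventory)
# VCPU oversubscribed 4x to 32, memory not oversubscribed (7168 MB usable),
# disk undersubscribed to ~71.1 GB to keep headroom.
```

So this 8-vCPU node can host up to 32 vCPUs' worth of instances, which is why the 1 GB root-disk instance below claims successfully.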
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.441 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.442 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.481 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.482 2 DEBUG nova.network.neutron [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.503 2 INFO nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.519 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.599 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.600 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.601 2 INFO nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Creating image(s)#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.602 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "/var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.602 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "/var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.603 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "/var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.603 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:48 np0005485008 nova_compute[192512]: 2025-10-13 15:42:48.604 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:49 np0005485008 nova_compute[192512]: 2025-10-13 15:42:49.096 2 WARNING oslo_policy.policy [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct 13 11:42:49 np0005485008 nova_compute[192512]: 2025-10-13 15:42:49.097 2 WARNING oslo_policy.policy [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
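The deprecation warning above points at `oslopolicy-convert-json-to-yaml`. The conversion itself is near-mechanical, since a flat JSON policy file maps 1:1 onto a YAML mapping; a stdlib-only sketch of the core transform (the real tool also carries defaults over as comments, which this does not):

```python
import json

def policy_json_to_yaml(json_text):
    """Stand-in for the core of oslopolicy-convert-json-to-yaml: each
    rule -> check-string pair becomes one quoted YAML mapping line."""
    policy = json.loads(json_text)
    return "\n".join(f'"{rule}": "{check}"' for rule, check in policy.items())

sample = '{"os_compute_api:servers:create": "rule:project_member_api"}'
print(policy_json_to_yaml(sample))
# → "os_compute_api:servers:create": "rule:project_member_api"
```

In practice you would run the packaged tool against `/etc/nova/policy.json` rather than hand-rolling this, but it shows why the conversion is backward compatible.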
Oct 13 11:42:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:42:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:42:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:42:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:42:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:42:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:42:49 np0005485008 nova_compute[192512]: 2025-10-13 15:42:49.635 2 DEBUG nova.network.neutron [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Successfully created port: 6b295aff-c137-4935-b9a4-2b8d088fb4f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.088 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.179 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.part --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.181 2 DEBUG nova.virt.images [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] dcd9fbd3-16ab-46e1-976e-0576b433c9d5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.182 2 DEBUG nova.privsep.utils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.182 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.part /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.653 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.part /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.converted" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.662 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.763 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7.converted --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.765 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
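The lines above (held under the image-hash lock) trace Nova's base-image cache fetch: download to `<hash>.part`, probe it with `qemu-img info`, convert qcow2 to raw as `<hash>.converted`, then rename into place so concurrent readers only ever see a complete base image. A pure-Python sketch of that file-state sequence, with the download and the `qemu-img convert` step replaced by plain copies (function name and the short image id are hypothetical):

```python
import os
import tempfile

def cache_base_image(cache_dir, image_id, data, detected_format="qcow2"):
    """Mimics the .part -> .converted -> final rename flow from the log.
    The real code shells out to qemu-img; here both steps are byte copies."""
    part = os.path.join(cache_dir, image_id + ".part")
    converted = os.path.join(cache_dir, image_id + ".converted")
    final = os.path.join(cache_dir, image_id)
    with open(part, "wb") as f:
        f.write(data)                          # stands in for the Glance download
    if detected_format == "qcow2":             # log: "was qcow2, converting to raw"
        with open(part, "rb") as src, open(converted, "wb") as dst:
            dst.write(src.read())              # stands in for `qemu-img convert -O raw`
        os.unlink(part)
        os.rename(converted, final)            # atomic publish of the base image
    else:
        os.rename(part, final)                 # already raw: just publish
    return final

with tempfile.TemporaryDirectory() as d:
    base = cache_base_image(d, "img", b"\x00" * 16)
```

The rename-into-place step is what makes the per-hash lock releasable after 2.161s here: later boots find the finished base image and skip the fetch entirely.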
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.781 2 INFO oslo.privsep.daemon [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpla4pq4pc/privsep.sock']#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.866 2 DEBUG nova.network.neutron [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Successfully updated port: 6b295aff-c137-4935-b9a4-2b8d088fb4f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.884 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.884 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquired lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:42:50 np0005485008 nova_compute[192512]: 2025-10-13 15:42:50.884 2 DEBUG nova.network.neutron [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.070 2 DEBUG nova.network.neutron [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.324 2 DEBUG nova.compute.manager [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received event network-changed-6b295aff-c137-4935-b9a4-2b8d088fb4f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.325 2 DEBUG nova.compute.manager [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Refreshing instance network info cache due to event network-changed-6b295aff-c137-4935-b9a4-2b8d088fb4f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.325 2 DEBUG oslo_concurrency.lockutils [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.535 2 INFO oslo.privsep.daemon [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.386 57 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.392 57 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.394 57 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.394 57 INFO oslo.privsep.daemon [-] privsep daemon running as pid 57#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.622 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.707 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.708 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.709 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.721 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:51 np0005485008 podman[214795]: 2025-10-13 15:42:51.782093084 +0000 UTC m=+0.071162215 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.786 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.788 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:51 np0005485008 podman[214796]: 2025-10-13 15:42:51.801720471 +0000 UTC m=+0.096227890 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 11:42:51 np0005485008 podman[214794]: 2025-10-13 15:42:51.804674973 +0000 UTC m=+0.108168870 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:42:51 np0005485008 podman[214801]: 2025-10-13 15:42:51.820559564 +0000 UTC m=+0.111297416 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.840 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.841 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
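The `qemu-img create` call above is the key trick of the Qcow2 backend: the instance disk is a copy-on-write overlay whose backing file is the shared raw base image, so every instance booted from this image reuses the one cached file. A small helper reconstructing that exact invocation (helper name is hypothetical; paths and size are the ones from the log):

```python
def qemu_overlay_cmd(base, disk, size_bytes):
    """Builds the qemu-img invocation seen in the log: a qcow2 overlay of
    the given virtual size, backed by the shared raw base image."""
    return [
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={base},backing_fmt=raw",
        disk, str(size_bytes),
    ]

cmd = qemu_overlay_cmd(
    "/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7",
    "/var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk",
    1073741824,  # 1 GiB root disk, matching the flavor used in this boot
)
```

Because only writes land in the overlay, creating the disk takes 0.051s here regardless of image size; the repeated `qemu-img info` calls around it verify the base image's format and virtual size before and after the overlay is attached.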
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.842 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.897 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.899 2 DEBUG nova.virt.disk.api [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Checking if we can resize image /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.899 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.977 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.978 2 DEBUG nova.virt.disk.api [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Cannot resize image /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.979 2 DEBUG nova.objects.instance [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'migration_context' on Instance uuid ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.995 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.995 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Ensure instance console log exists: /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.996 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.996 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:51 np0005485008 nova_compute[192512]: 2025-10-13 15:42:51.996 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.350 2 DEBUG nova.network.neutron [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updating instance_info_cache with network_info: [{"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.376 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Releasing lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.376 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Instance network_info: |[{"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.377 2 DEBUG oslo_concurrency.lockutils [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.378 2 DEBUG nova.network.neutron [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Refreshing network info cache for port 6b295aff-c137-4935-b9a4-2b8d088fb4f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.380 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Start _get_guest_xml network_info=[{"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.386 2 WARNING nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.392 2 DEBUG nova.virt.libvirt.host [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.393 2 DEBUG nova.virt.libvirt.host [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.400 2 DEBUG nova.virt.libvirt.host [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.401 2 DEBUG nova.virt.libvirt.host [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.402 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.402 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.403 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.403 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.404 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.404 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.404 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.405 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.405 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.406 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.406 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.406 2 DEBUG nova.virt.hardware [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.412 2 DEBUG nova.privsep.utils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.413 2 DEBUG nova.virt.libvirt.vif [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:42:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-235079186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-235079186',id=2,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-eofhtcw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:42:48Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=ab52c277-77ae-4d69-b9c9-74f1c5c5fa92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.414 2 DEBUG nova.network.os_vif_util [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.415 2 DEBUG nova.network.os_vif_util [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=6b295aff-c137-4935-b9a4-2b8d088fb4f6,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b295aff-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.418 2 DEBUG nova.objects.instance [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'pci_devices' on Instance uuid ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.438 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <uuid>ab52c277-77ae-4d69-b9c9-74f1c5c5fa92</uuid>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <name>instance-00000002</name>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteActionsViaActuator-server-235079186</nova:name>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:42:52</nova:creationTime>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:user uuid="4732dfe3d815487f863c441d326f4231">tempest-TestExecuteActionsViaActuator-836873667-project-admin</nova:user>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:project uuid="de23aa1f8b1f466e8bfa712e3140ce54">tempest-TestExecuteActionsViaActuator-836873667</nova:project>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        <nova:port uuid="6b295aff-c137-4935-b9a4-2b8d088fb4f6">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <entry name="serial">ab52c277-77ae-4d69-b9c9-74f1c5c5fa92</entry>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <entry name="uuid">ab52c277-77ae-4d69-b9c9-74f1c5c5fa92</entry>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk.config"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:61:93:5c"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <target dev="tap6b295aff-c1"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/console.log" append="off"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:42:52 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:42:52 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:42:52 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:42:52 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.439 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Preparing to wait for external event network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.440 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.440 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.440 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.441 2 DEBUG nova.virt.libvirt.vif [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:42:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-235079186',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-235079186',id=2,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-eofhtcw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:42:48Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=ab52c277-77ae-4d69-b9c9-74f1c5c5fa92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.441 2 DEBUG nova.network.os_vif_util [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.442 2 DEBUG nova.network.os_vif_util [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=6b295aff-c137-4935-b9a4-2b8d088fb4f6,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b295aff-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.443 2 DEBUG os_vif [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=6b295aff-c137-4935-b9a4-2b8d088fb4f6,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b295aff-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.477 2 DEBUG ovsdbapp.backend.ovs_idl [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.478 2 DEBUG ovsdbapp.backend.ovs_idl [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.478 2 DEBUG ovsdbapp.backend.ovs_idl [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:42:52 np0005485008 nova_compute[192512]: 2025-10-13 15:42:52.494 2 INFO oslo.privsep.daemon [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpl61wkfqv/privsep.sock']#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.263 2 INFO oslo.privsep.daemon [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.096 78 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.099 78 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.101 78 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.101 78 INFO oslo.privsep.daemon [-] privsep daemon running as pid 78#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.588 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b295aff-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.591 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b295aff-c1, col_values=(('external_ids', {'iface-id': '6b295aff-c137-4935-b9a4-2b8d088fb4f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:93:5c', 'vm-uuid': 'ab52c277-77ae-4d69-b9c9-74f1c5c5fa92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:53 np0005485008 NetworkManager[51587]: <info>  [1760370173.5949] manager: (tap6b295aff-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.602 2 INFO os_vif [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=6b295aff-c137-4935-b9a4-2b8d088fb4f6,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b295aff-c1')#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.700 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.700 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.701 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No VIF found with MAC fa:16:3e:61:93:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 11:42:53 np0005485008 nova_compute[192512]: 2025-10-13 15:42:53.701 2 INFO nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Using config drive#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.247 2 INFO nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Creating config drive at /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk.config#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.257 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpadxaa_pp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.347 2 DEBUG nova.network.neutron [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updated VIF entry in instance network info cache for port 6b295aff-c137-4935-b9a4-2b8d088fb4f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.349 2 DEBUG nova.network.neutron [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updating instance_info_cache with network_info: [{"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.378 2 DEBUG oslo_concurrency.lockutils [req-5e0a0362-dc21-42ce-9e6e-f053343d2f5f req-24342e54-fed0-4ab4-ac63-58ce74f2f6b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.385 2 DEBUG oslo_concurrency.processutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpadxaa_pp" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:42:54 np0005485008 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 13 11:42:54 np0005485008 kernel: tap6b295aff-c1: entered promiscuous mode
Oct 13 11:42:54 np0005485008 NetworkManager[51587]: <info>  [1760370174.4962] manager: (tap6b295aff-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/23)
Oct 13 11:42:54 np0005485008 ovn_controller[94758]: 2025-10-13T15:42:54Z|00027|binding|INFO|Claiming lport 6b295aff-c137-4935-b9a4-2b8d088fb4f6 for this chassis.
Oct 13 11:42:54 np0005485008 ovn_controller[94758]: 2025-10-13T15:42:54Z|00028|binding|INFO|6b295aff-c137-4935-b9a4-2b8d088fb4f6: Claiming fa:16:3e:61:93:5c 10.100.0.12
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:54.532 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:93:5c 10.100.0.12'], port_security=['fa:16:3e:61:93:5c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ab52c277-77ae-4d69-b9c9-74f1c5c5fa92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=6b295aff-c137-4935-b9a4-2b8d088fb4f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:42:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:54.534 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 6b295aff-c137-4935-b9a4-2b8d088fb4f6 in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis#033[00m
Oct 13 11:42:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:54.536 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:42:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:54.538 103642 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpedgna51a/privsep.sock']#033[00m
Oct 13 11:42:54 np0005485008 systemd-udevd[214938]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:42:54 np0005485008 podman[214912]: 2025-10-13 15:42:54.56504889 +0000 UTC m=+0.089464421 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 13 11:42:54 np0005485008 NetworkManager[51587]: <info>  [1760370174.5692] device (tap6b295aff-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:42:54 np0005485008 NetworkManager[51587]: <info>  [1760370174.5700] device (tap6b295aff-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:54 np0005485008 systemd-machined[152551]: New machine qemu-1-instance-00000002.
Oct 13 11:42:54 np0005485008 ovn_controller[94758]: 2025-10-13T15:42:54Z|00029|binding|INFO|Setting lport 6b295aff-c137-4935-b9a4-2b8d088fb4f6 ovn-installed in OVS
Oct 13 11:42:54 np0005485008 ovn_controller[94758]: 2025-10-13T15:42:54Z|00030|binding|INFO|Setting lport 6b295aff-c137-4935-b9a4-2b8d088fb4f6 up in Southbound
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:54 np0005485008 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.768 2 DEBUG nova.compute.manager [req-9b3cefc7-be41-4cbf-90a7-50b188c0c9f6 req-9e2a0640-bebb-4cb6-9190-ade34b7e50fe 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received event network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.770 2 DEBUG oslo_concurrency.lockutils [req-9b3cefc7-be41-4cbf-90a7-50b188c0c9f6 req-9e2a0640-bebb-4cb6-9190-ade34b7e50fe 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.770 2 DEBUG oslo_concurrency.lockutils [req-9b3cefc7-be41-4cbf-90a7-50b188c0c9f6 req-9e2a0640-bebb-4cb6-9190-ade34b7e50fe 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.771 2 DEBUG oslo_concurrency.lockutils [req-9b3cefc7-be41-4cbf-90a7-50b188c0c9f6 req-9e2a0640-bebb-4cb6-9190-ade34b7e50fe 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:54 np0005485008 nova_compute[192512]: 2025-10-13 15:42:54.771 2 DEBUG nova.compute.manager [req-9b3cefc7-be41-4cbf-90a7-50b188c0c9f6 req-9e2a0640-bebb-4cb6-9190-ade34b7e50fe 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Processing event network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.222 103642 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.222 103642 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpedgna51a/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.075 214965 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.081 214965 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.084 214965 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.084 214965 INFO oslo.privsep.daemon [-] privsep daemon running as pid 214965#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.226 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[bda1392f-4d71-464b-94cf-f8a1cbdfc250]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.351 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370175.350396, ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.351 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] VM Started (Lifecycle Event)#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.354 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.358 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.373 2 INFO nova.virt.libvirt.driver [-] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Instance spawned successfully.#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.375 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.377 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.381 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.410 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.411 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.412 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.412 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.413 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.413 2 DEBUG nova.virt.libvirt.driver [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.453 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.454 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370175.3505564, ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.454 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] VM Paused (Lifecycle Event)#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.465 2 INFO nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Took 6.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.467 2 DEBUG nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.483 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.486 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370175.3568392, ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.486 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.516 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.520 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.552 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.560 2 INFO nova.compute.manager [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Took 7.31 seconds to build instance.#033[00m
Oct 13 11:42:55 np0005485008 nova_compute[192512]: 2025-10-13 15:42:55.583 2 DEBUG oslo_concurrency.lockutils [None req-730423a9-a0a4-4143-86b3-4dcc4c93c0a6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.776 214965 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.778 214965 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:55.778 214965 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.402 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f7d57a-a8c8-4c5e-8a76-d170348647a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.404 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap844f3185-31 in ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.406 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap844f3185-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.406 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5355e7ae-482a-4886-abf2-69ebeb590b50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.410 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[89aa3cfd-e5c7-4dda-9aed-47be575bef72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.448 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca4a845-0471-4c3f-b5b0-95265f41d1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.466 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2076f5e4-3d50-4408-87a2-651cf7c7639e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:56.470 103642 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmph25vbsb8/privsep.sock']#033[00m
Oct 13 11:42:56 np0005485008 nova_compute[192512]: 2025-10-13 15:42:56.725 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:42:56 np0005485008 nova_compute[192512]: 2025-10-13 15:42:56.857 2 DEBUG nova.compute.manager [req-7a6e3e2f-fa2c-403b-93dd-9f6c40651d35 req-e517e7a9-8a2c-42bc-af77-b94a2890b110 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received event network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:42:56 np0005485008 nova_compute[192512]: 2025-10-13 15:42:56.858 2 DEBUG oslo_concurrency.lockutils [req-7a6e3e2f-fa2c-403b-93dd-9f6c40651d35 req-e517e7a9-8a2c-42bc-af77-b94a2890b110 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:56 np0005485008 nova_compute[192512]: 2025-10-13 15:42:56.858 2 DEBUG oslo_concurrency.lockutils [req-7a6e3e2f-fa2c-403b-93dd-9f6c40651d35 req-e517e7a9-8a2c-42bc-af77-b94a2890b110 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:56 np0005485008 nova_compute[192512]: 2025-10-13 15:42:56.859 2 DEBUG oslo_concurrency.lockutils [req-7a6e3e2f-fa2c-403b-93dd-9f6c40651d35 req-e517e7a9-8a2c-42bc-af77-b94a2890b110 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:56 np0005485008 nova_compute[192512]: 2025-10-13 15:42:56.859 2 DEBUG nova.compute.manager [req-7a6e3e2f-fa2c-403b-93dd-9f6c40651d35 req-e517e7a9-8a2c-42bc-af77-b94a2890b110 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] No waiting events found dispatching network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:42:56 np0005485008 nova_compute[192512]: 2025-10-13 15:42:56.859 2 WARNING nova.compute.manager [req-7a6e3e2f-fa2c-403b-93dd-9f6c40651d35 req-e517e7a9-8a2c-42bc-af77-b94a2890b110 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received unexpected event network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 for instance with vm_state active and task_state None.#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.222 103642 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.223 103642 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmph25vbsb8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.054 214979 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.059 214979 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.061 214979 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.061 214979 INFO oslo.privsep.daemon [-] privsep daemon running as pid 214979#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.226 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[0619eec2-d7b9-409b-8c71-756e7f25596a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.808 214979 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.808 214979 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:42:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:57.809 214979 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.424 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[3f61745c-2bd2-4238-9ee6-f17fe1ede4f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.433 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[bf071f26-0608-490a-9392-52d7df2dc642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 NetworkManager[51587]: <info>  [1760370178.4369] manager: (tap844f3185-30): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.458 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[6fefeb5f-76f7-453a-9021-59f7bbe993a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.461 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[2f17e8db-e612-4888-b1bd-d8bc68e386ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 systemd-udevd[214989]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:42:58 np0005485008 NetworkManager[51587]: <info>  [1760370178.4907] device (tap844f3185-30): carrier: link connected
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.495 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[13048a03-fd74-4c35-8b2e-dd167613a63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.513 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[21cc5575-fa53-42d7-a62d-08c529483d8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 29213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 215008, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.532 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[091c7386-c9cf-4190-902f-4bc6ab2791a2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:dc07'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371615, 'tstamp': 371615}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215009, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.549 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4998df42-470d-45ac-a584-3c2436808e9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 29213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 215010, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 nova_compute[192512]: 2025-10-13 15:42:58.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.592 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9674feab-d2ad-41f2-9dc9-69263f51f530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.675 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6377c474-06fc-4c6d-a90b-5851dfd1fb2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.677 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.678 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.678 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:42:58 np0005485008 kernel: tap844f3185-30: entered promiscuous mode
Oct 13 11:42:58 np0005485008 nova_compute[192512]: 2025-10-13 15:42:58.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:58 np0005485008 NetworkManager[51587]: <info>  [1760370178.6815] manager: (tap844f3185-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.689 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:42:58 np0005485008 nova_compute[192512]: 2025-10-13 15:42:58.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:58 np0005485008 ovn_controller[94758]: 2025-10-13T15:42:58Z|00031|binding|INFO|Releasing lport 94a96fe0-138f-4b15-b38e-a8f08a7e2933 from this chassis (sb_readonly=0)
Oct 13 11:42:58 np0005485008 nova_compute[192512]: 2025-10-13 15:42:58.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.695 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/844f3185-3b42-4e49-9ef5-690ae5e238a0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/844f3185-3b42-4e49-9ef5-690ae5e238a0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.702 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9b5ece46-20d6-4547-ba27-1f2997150892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:42:58 np0005485008 nova_compute[192512]: 2025-10-13 15:42:58.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.707 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-844f3185-3b42-4e49-9ef5-690ae5e238a0
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/844f3185-3b42-4e49-9ef5-690ae5e238a0.pid.haproxy
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 844f3185-3b42-4e49-9ef5-690ae5e238a0
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 11:42:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:42:58.708 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'env', 'PROCESS_TAG=haproxy-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/844f3185-3b42-4e49-9ef5-690ae5e238a0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 11:42:59 np0005485008 podman[215044]: 2025-10-13 15:42:59.089603873 +0000 UTC m=+0.064028253 container create 7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 11:42:59 np0005485008 podman[215044]: 2025-10-13 15:42:59.050507383 +0000 UTC m=+0.024931753 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:42:59 np0005485008 systemd[1]: Started libpod-conmon-7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c.scope.
Oct 13 11:42:59 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:42:59 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe390109edb036db966cdf2936d8a28c00d876708616712f75555ec95ed5174/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 11:42:59 np0005485008 podman[215044]: 2025-10-13 15:42:59.211021252 +0000 UTC m=+0.185445632 container init 7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:42:59 np0005485008 podman[215044]: 2025-10-13 15:42:59.218429412 +0000 UTC m=+0.192853762 container start 7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 11:42:59 np0005485008 neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0[215060]: [NOTICE]   (215064) : New worker (215066) forked
Oct 13 11:42:59 np0005485008 neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0[215060]: [NOTICE]   (215064) : Loading success.
Oct 13 11:42:59 np0005485008 nova_compute[192512]: 2025-10-13 15:42:59.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:42:59 np0005485008 nova_compute[192512]: 2025-10-13 15:42:59.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:43:00 np0005485008 nova_compute[192512]: 2025-10-13 15:43:00.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.446 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.446 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.447 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.664 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.664 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.665 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:43:02 np0005485008 nova_compute[192512]: 2025-10-13 15:43:02.665 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:43:03 np0005485008 nova_compute[192512]: 2025-10-13 15:43:03.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:04 np0005485008 nova_compute[192512]: 2025-10-13 15:43:04.052 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updating instance_info_cache with network_info: [{"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:43:04 np0005485008 nova_compute[192512]: 2025-10-13 15:43:04.070 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:43:04 np0005485008 nova_compute[192512]: 2025-10-13 15:43:04.071 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:43:04 np0005485008 nova_compute[192512]: 2025-10-13 15:43:04.071 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:04 np0005485008 nova_compute[192512]: 2025-10-13 15:43:04.071 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:04 np0005485008 nova_compute[192512]: 2025-10-13 15:43:04.072 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:04 np0005485008 nova_compute[192512]: 2025-10-13 15:43:04.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.474 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.475 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.476 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.476 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.581 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:43:05 np0005485008 podman[202884]: time="2025-10-13T15:43:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:43:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:43:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:43:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:43:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3450 "" "Go-http-client/1.1"
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.692 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.693 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.769 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.919 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.921 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5677MB free_disk=73.46931457519531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.921 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.921 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.994 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.995 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:43:05 np0005485008 nova_compute[192512]: 2025-10-13 15:43:05.995 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.081 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.121 2 ERROR nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [req-a8a491f1-1ab9-4419-85e9-f8ef86af1e3f] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID b038b2e7-0dfd-4adb-a174-3db2b96fc8ce.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-a8a491f1-1ab9-4419-85e9-f8ef86af1e3f"}]}#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.155 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.177 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.177 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.190 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.220 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.263 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.338 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updated inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.339 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.339 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.390 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:43:06 np0005485008 nova_compute[192512]: 2025-10-13 15:43:06.390 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
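The `409 ... placement.concurrent_update` above is Placement's optimistic-concurrency guard in action: every inventory update quotes the resource provider generation the client last saw, a stale generation is rejected, and the client refreshes its view and retries — which is exactly the `Refreshing inventories` / `Updated inventory ... with generation 3` / `generation from 3 to 4` sequence that follows. A toy compare-and-swap sketch of that protocol (all names are illustrative, not Nova's or Placement's API):

```python
import threading

class ConflictError(Exception):
    """Stands in for Placement's 409 placement.concurrent_update."""

class ResourceProvider:
    """Toy provider: writes must quote the generation they were based on."""
    def __init__(self):
        self._lock = threading.Lock()
        self.generation = 0
        self.inventory = {}

    def put_inventory(self, inventory, generation):
        with self._lock:
            if generation != self.generation:  # stale client view: reject
                raise ConflictError(self.generation)
            self.inventory = inventory
            self.generation += 1               # every successful write bumps it
            return self.generation

def set_inventory_with_retry(provider, inventory, seen_generation, retries=3):
    """Refresh-and-retry loop a client runs after a generation conflict."""
    for _ in range(retries):
        try:
            return provider.put_inventory(inventory, seen_generation)
        except ConflictError:
            seen_generation = provider.generation  # "refresh" the stale view
    raise RuntimeError("gave up after repeated generation conflicts")
```

In this log the conflicting writer was another process updating the same provider between the resource tracker's read and its PUT; one refresh was enough for the retry to land.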
Oct 13 11:43:06 np0005485008 podman[215095]: 2025-10-13 15:43:06.786563349 +0000 UTC m=+0.081754901 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Oct 13 11:43:06 np0005485008 ovn_controller[94758]: 2025-10-13T15:43:06Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:93:5c 10.100.0.12
Oct 13 11:43:07 np0005485008 ovn_controller[94758]: 2025-10-13T15:43:07Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:93:5c 10.100.0.12
Oct 13 11:43:08 np0005485008 nova_compute[192512]: 2025-10-13 15:43:08.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:10 np0005485008 nova_compute[192512]: 2025-10-13 15:43:10.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:13 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:13.448 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:43:13 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:13.449 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:43:13 np0005485008 nova_compute[192512]: 2025-10-13 15:43:13.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:13 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:13.451 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:43:13 np0005485008 nova_compute[192512]: 2025-10-13 15:43:13.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:15 np0005485008 nova_compute[192512]: 2025-10-13 15:43:15.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:18 np0005485008 nova_compute[192512]: 2025-10-13 15:43:18.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:43:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:43:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:43:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:43:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:43:20 np0005485008 nova_compute[192512]: 2025-10-13 15:43:20.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:22 np0005485008 podman[215119]: 2025-10-13 15:43:22.77284158 +0000 UTC m=+0.072976241 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 11:43:22 np0005485008 podman[215120]: 2025-10-13 15:43:22.798534995 +0000 UTC m=+0.085578260 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 11:43:22 np0005485008 podman[215121]: 2025-10-13 15:43:22.804547742 +0000 UTC m=+0.093209407 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:43:22 np0005485008 podman[215122]: 2025-10-13 15:43:22.845928052 +0000 UTC m=+0.123120422 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 11:43:23 np0005485008 nova_compute[192512]: 2025-10-13 15:43:23.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:24 np0005485008 podman[215205]: 2025-10-13 15:43:24.754486437 +0000 UTC m=+0.060614837 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 13 11:43:25 np0005485008 nova_compute[192512]: 2025-10-13 15:43:25.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:28 np0005485008 nova_compute[192512]: 2025-10-13 15:43:28.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:28 np0005485008 ovn_controller[94758]: 2025-10-13T15:43:28Z|00032|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 13 11:43:30 np0005485008 nova_compute[192512]: 2025-10-13 15:43:30.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:33 np0005485008 nova_compute[192512]: 2025-10-13 15:43:33.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:33.946 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:33.947 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:33.947 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:43:35 np0005485008 nova_compute[192512]: 2025-10-13 15:43:35.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:35 np0005485008 podman[202884]: time="2025-10-13T15:43:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:43:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:43:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:43:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:43:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3452 "" "Go-http-client/1.1"
Oct 13 11:43:37 np0005485008 podman[215226]: 2025-10-13 15:43:37.768363637 +0000 UTC m=+0.067392757 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 11:43:38 np0005485008 nova_compute[192512]: 2025-10-13 15:43:38.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:40 np0005485008 nova_compute[192512]: 2025-10-13 15:43:40.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:41 np0005485008 nova_compute[192512]: 2025-10-13 15:43:41.934 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:41 np0005485008 nova_compute[192512]: 2025-10-13 15:43:41.935 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:41 np0005485008 nova_compute[192512]: 2025-10-13 15:43:41.951 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.030 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.031 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.036 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.037 2 INFO nova.compute.claims [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.152 2 DEBUG nova.compute.provider_tree [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.186 2 DEBUG nova.scheduler.client.report [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
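The inventory dict logged above is what placement uses to size this host: the schedulable capacity of each resource class is (total - reserved) * allocation_ratio. Recomputing that from the logged values (a sketch of the arithmetic only; the authoritative formula lives in the placement service):

```python
# Inventory exactly as reported for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

def effective_capacity(inv):
    # (total - reserved) * allocation_ratio, per resource class
    return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
            for rc, v in inv.items()}

caps = effective_capacity(inventory)
print(caps)  # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB ≈ 70.2
```

So this node can overcommit CPU 4:1 (32 schedulable vCPUs on 8 physical), takes memory at face value minus the 512 MB reserve, and undercommits disk slightly (ratio 0.9) to leave headroom for qcow2 growth.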
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.267 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.268 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.384 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.384 2 DEBUG nova.network.neutron [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.481 2 INFO nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.571 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.745 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.746 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.746 2 INFO nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Creating image(s)#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.747 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "/var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.747 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "/var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.748 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "/var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.762 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.826 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.827 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.828 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.838 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.895 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.897 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.946 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk 1073741824" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.947 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
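The `qemu-img create` above builds the instance disk as a qcow2 copy-on-write overlay: writes land in the per-instance file while unmodified reads fall through to the shared, read-only base image in `_base`. A sketch that rebuilds the same argv from the logged paths (nothing is executed here; the helper name is illustrative):

```python
def qcow2_overlay_argv(backing_file, backing_fmt, target, size_bytes):
    # qemu-img create -f qcow2 -o backing_file=...,backing_fmt=... TARGET SIZE
    return [
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={backing_file},backing_fmt={backing_fmt}",
        target, str(size_bytes),
    ]

argv = qcow2_overlay_argv(
    "/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7",
    "raw",
    "/var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk",
    1073741824,  # 1 GiB, the flavor's root disk size
)
print(" ".join(argv))
```

Passing `backing_fmt` explicitly matters: newer qemu-img refuses to probe the backing file's format, so nova always states it (here `raw`, the format of the cached base image).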
Oct 13 11:43:42 np0005485008 nova_compute[192512]: 2025-10-13 15:43:42.948 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.035 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.036 2 DEBUG nova.virt.disk.api [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Checking if we can resize image /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.036 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.094 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.095 2 DEBUG nova.virt.disk.api [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Cannot resize image /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
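The "Cannot resize image ... to a smaller size" message is benign: the disk was just created at exactly the requested 1 GiB, so the grow step is a no-op and nova never shrinks. A simplified sketch of that guard (modeled loosely on `nova.virt.disk.api.can_resize_image`, with the `qemu-img info` probe replaced by a plain parameter):

```python
def can_resize(current_virtual_size, requested_size):
    # Growing is allowed; shrinking (or an equal-size no-op) is refused.
    if current_virtual_size >= requested_size:
        print("Cannot resize image to a smaller size.")
        return False
    return True

GiB = 1024 ** 3
can_resize(GiB, GiB)  # the case logged above: equal sizes, resize skipped
```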
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.095 2 DEBUG nova.objects.instance [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'migration_context' on Instance uuid 7144b3d2-d00d-489a-81a2-11dd796fb608 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.110 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.110 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Ensure instance console log exists: /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.111 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.111 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.111 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:43:43 np0005485008 nova_compute[192512]: 2025-10-13 15:43:43.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:44 np0005485008 nova_compute[192512]: 2025-10-13 15:43:44.066 2 DEBUG nova.network.neutron [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Successfully created port: 62269976-0b06-4e64-9439-0ec2ac44f78c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.311 2 DEBUG nova.network.neutron [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Successfully updated port: 62269976-0b06-4e64-9439-0ec2ac44f78c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.338 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.339 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquired lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.339 2 DEBUG nova.network.neutron [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.445 2 DEBUG nova.compute.manager [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received event network-changed-62269976-0b06-4e64-9439-0ec2ac44f78c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.446 2 DEBUG nova.compute.manager [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Refreshing instance network info cache due to event network-changed-62269976-0b06-4e64-9439-0ec2ac44f78c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.446 2 DEBUG oslo_concurrency.lockutils [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:43:45 np0005485008 nova_compute[192512]: 2025-10-13 15:43:45.512 2 DEBUG nova.network.neutron [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.619 2 DEBUG nova.network.neutron [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Updating instance_info_cache with network_info: [{"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.644 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Releasing lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.644 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Instance network_info: |[{"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
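The network_info blob logged above is plain JSON, which makes it easy to pull the bound port's MAC and fixed IP out when debugging. A sketch over a trimmed excerpt of the logged payload (only the fields used here are kept):

```python
import json

# Trimmed excerpt of the network_info for port 62269976-0b06-4e64-9439-0ec2ac44f78c
network_info = json.loads("""
[{"id": "62269976-0b06-4e64-9439-0ec2ac44f78c",
  "address": "fa:16:3e:6e:e3:5f",
  "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.14", "type": "fixed"}]}]},
  "devname": "tap62269976-0b",
  "vnic_type": "normal"}]
""")

vif = network_info[0]
fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
print(vif["address"], fixed_ips)  # fa:16:3e:6e:e3:5f ['10.100.0.14']
```

Note `"active": false` in the full log entry: the port exists in Neutron but OVN has not yet reported it up; nova will wait for the corresponding `network-vif-plugged` event before declaring the instance ACTIVE.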
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.645 2 DEBUG oslo_concurrency.lockutils [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.646 2 DEBUG nova.network.neutron [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Refreshing network info cache for port 62269976-0b06-4e64-9439-0ec2ac44f78c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.650 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Start _get_guest_xml network_info=[{"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.656 2 WARNING nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.662 2 DEBUG nova.virt.libvirt.host [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.663 2 DEBUG nova.virt.libvirt.host [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.672 2 DEBUG nova.virt.libvirt.host [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.673 2 DEBUG nova.virt.libvirt.host [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.673 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.674 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.675 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.675 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.676 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.676 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.676 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.677 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.677 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.678 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.678 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.678 2 DEBUG nova.virt.hardware [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.685 2 DEBUG nova.virt.libvirt.vif [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-786066552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-786066552',id=4,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-ude8s7l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:43:42Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=7144b3d2-d00d-489a-81a2-11dd796fb608,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.686 2 DEBUG nova.network.os_vif_util [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.686 2 DEBUG nova.network.os_vif_util [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:e3:5f,bridge_name='br-int',has_traffic_filtering=True,id=62269976-0b06-4e64-9439-0ec2ac44f78c,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62269976-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.688 2 DEBUG nova.objects.instance [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7144b3d2-d00d-489a-81a2-11dd796fb608 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.706 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <uuid>7144b3d2-d00d-489a-81a2-11dd796fb608</uuid>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <name>instance-00000004</name>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteActionsViaActuator-server-786066552</nova:name>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:43:46</nova:creationTime>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:user uuid="4732dfe3d815487f863c441d326f4231">tempest-TestExecuteActionsViaActuator-836873667-project-admin</nova:user>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:project uuid="de23aa1f8b1f466e8bfa712e3140ce54">tempest-TestExecuteActionsViaActuator-836873667</nova:project>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        <nova:port uuid="62269976-0b06-4e64-9439-0ec2ac44f78c">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <entry name="serial">7144b3d2-d00d-489a-81a2-11dd796fb608</entry>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <entry name="uuid">7144b3d2-d00d-489a-81a2-11dd796fb608</entry>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk.config"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:6e:e3:5f"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <target dev="tap62269976-0b"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/console.log" append="off"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:43:46 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:43:46 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:43:46 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:43:46 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.707 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Preparing to wait for external event network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.708 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.709 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.709 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.711 2 DEBUG nova.virt.libvirt.vif [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-786066552',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-786066552',id=4,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-ude8s7l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:43:42Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=7144b3d2-d00d-489a-81a2-11dd796fb608,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.712 2 DEBUG nova.network.os_vif_util [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.713 2 DEBUG nova.network.os_vif_util [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:e3:5f,bridge_name='br-int',has_traffic_filtering=True,id=62269976-0b06-4e64-9439-0ec2ac44f78c,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62269976-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.714 2 DEBUG os_vif [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:e3:5f,bridge_name='br-int',has_traffic_filtering=True,id=62269976-0b06-4e64-9439-0ec2ac44f78c,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62269976-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.715 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.716 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62269976-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.721 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62269976-0b, col_values=(('external_ids', {'iface-id': '62269976-0b06-4e64-9439-0ec2ac44f78c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:e3:5f', 'vm-uuid': '7144b3d2-d00d-489a-81a2-11dd796fb608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:43:46 np0005485008 NetworkManager[51587]: <info>  [1760370226.7631] manager: (tap62269976-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.773 2 INFO os_vif [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:e3:5f,bridge_name='br-int',has_traffic_filtering=True,id=62269976-0b06-4e64-9439-0ec2ac44f78c,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62269976-0b')
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.837 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.837 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.838 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No VIF found with MAC fa:16:3e:6e:e3:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 13 11:43:46 np0005485008 nova_compute[192512]: 2025-10-13 15:43:46.838 2 INFO nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Using config drive
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.244 2 INFO nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Creating config drive at /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk.config
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.250 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8q956_wz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.377 2 DEBUG oslo_concurrency.processutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8q956_wz" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 11:43:47 np0005485008 kernel: tap62269976-0b: entered promiscuous mode
Oct 13 11:43:47 np0005485008 NetworkManager[51587]: <info>  [1760370227.4602] manager: (tap62269976-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/27)
Oct 13 11:43:47 np0005485008 ovn_controller[94758]: 2025-10-13T15:43:47Z|00033|binding|INFO|Claiming lport 62269976-0b06-4e64-9439-0ec2ac44f78c for this chassis.
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:43:47 np0005485008 ovn_controller[94758]: 2025-10-13T15:43:47Z|00034|binding|INFO|62269976-0b06-4e64-9439-0ec2ac44f78c: Claiming fa:16:3e:6e:e3:5f 10.100.0.14
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.474 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:e3:5f 10.100.0.14'], port_security=['fa:16:3e:6e:e3:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7144b3d2-d00d-489a-81a2-11dd796fb608', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=62269976-0b06-4e64-9439-0ec2ac44f78c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.476 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 62269976-0b06-4e64-9439-0ec2ac44f78c in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.480 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0
Oct 13 11:43:47 np0005485008 ovn_controller[94758]: 2025-10-13T15:43:47Z|00035|binding|INFO|Setting lport 62269976-0b06-4e64-9439-0ec2ac44f78c up in Southbound
Oct 13 11:43:47 np0005485008 ovn_controller[94758]: 2025-10-13T15:43:47Z|00036|binding|INFO|Setting lport 62269976-0b06-4e64-9439-0ec2ac44f78c ovn-installed in OVS
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.500 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ef53e6e3-99e2-4983-a305-dc5bbdde243e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:43:47 np0005485008 systemd-udevd[215281]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:43:47 np0005485008 systemd-machined[152551]: New machine qemu-2-instance-00000004.
Oct 13 11:43:47 np0005485008 NetworkManager[51587]: <info>  [1760370227.5219] device (tap62269976-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:43:47 np0005485008 NetworkManager[51587]: <info>  [1760370227.5229] device (tap62269976-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:43:47 np0005485008 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.541 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0df406-c4a1-45ea-8db0-5b1b63942cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.544 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[72b51183-70a6-43aa-9bd4-3bf880e13ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.577 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[bfefcef9-9f5f-40a0-891e-7cc7dca1586c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.598 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[da54d462-17da-4af7-a588-d9556dc9a861]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 29213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 215294, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.619 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[acbc51dd-c937-423d-b89c-b28224eeba6d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215295, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215295, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.622 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.626 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.627 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.627 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:43:47 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:43:47.628 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.838 2 DEBUG nova.compute.manager [req-6e143627-dda4-4248-8036-7ccd243a5d06 req-7e70d112-2266-4eb8-bdd8-5e051e7f9752 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received event network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.840 2 DEBUG oslo_concurrency.lockutils [req-6e143627-dda4-4248-8036-7ccd243a5d06 req-7e70d112-2266-4eb8-bdd8-5e051e7f9752 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.841 2 DEBUG oslo_concurrency.lockutils [req-6e143627-dda4-4248-8036-7ccd243a5d06 req-7e70d112-2266-4eb8-bdd8-5e051e7f9752 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.841 2 DEBUG oslo_concurrency.lockutils [req-6e143627-dda4-4248-8036-7ccd243a5d06 req-7e70d112-2266-4eb8-bdd8-5e051e7f9752 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:43:47 np0005485008 nova_compute[192512]: 2025-10-13 15:43:47.841 2 DEBUG nova.compute.manager [req-6e143627-dda4-4248-8036-7ccd243a5d06 req-7e70d112-2266-4eb8-bdd8-5e051e7f9752 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Processing event network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.368 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.369 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370228.3679843, 7144b3d2-d00d-489a-81a2-11dd796fb608 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.370 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] VM Started (Lifecycle Event)
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.374 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.378 2 INFO nova.virt.libvirt.driver [-] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Instance spawned successfully.
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.378 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.425 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.429 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.448 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.448 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.449 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.450 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.450 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.451 2 DEBUG nova.virt.libvirt.driver [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.459 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.460 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370228.3682442, 7144b3d2-d00d-489a-81a2-11dd796fb608 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.460 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] VM Paused (Lifecycle Event)
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.489 2 DEBUG nova.network.neutron [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Updated VIF entry in instance network info cache for port 62269976-0b06-4e64-9439-0ec2ac44f78c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.490 2 DEBUG nova.network.neutron [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Updating instance_info_cache with network_info: [{"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.544 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.548 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370228.3736086, 7144b3d2-d00d-489a-81a2-11dd796fb608 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.549 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] VM Resumed (Lifecycle Event)
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.574 2 DEBUG oslo_concurrency.lockutils [req-1e8397c5-d170-4614-a30c-c51631eef970 req-4915396b-d136-4739-8ec8-58bea55388be 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.585 2 INFO nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Took 5.84 seconds to spawn the instance on the hypervisor.
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.585 2 DEBUG nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.587 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.593 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.666 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.723 2 INFO nova.compute.manager [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Took 6.72 seconds to build instance.
Oct 13 11:43:48 np0005485008 nova_compute[192512]: 2025-10-13 15:43:48.764 2 DEBUG oslo_concurrency.lockutils [None req-4f76a806-deee-4e68-96e3-04f6d6fe8d91 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:43:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:43:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:43:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:43:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:43:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:43:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:43:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:43:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:43:49 np0005485008 nova_compute[192512]: 2025-10-13 15:43:49.947 2 DEBUG nova.compute.manager [req-b6f05e39-dd41-4d3a-a94d-62541e453537 req-bdf9f5ca-280e-46ca-86de-23ccbf6df9fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received event network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:43:49 np0005485008 nova_compute[192512]: 2025-10-13 15:43:49.949 2 DEBUG oslo_concurrency.lockutils [req-b6f05e39-dd41-4d3a-a94d-62541e453537 req-bdf9f5ca-280e-46ca-86de-23ccbf6df9fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:43:49 np0005485008 nova_compute[192512]: 2025-10-13 15:43:49.950 2 DEBUG oslo_concurrency.lockutils [req-b6f05e39-dd41-4d3a-a94d-62541e453537 req-bdf9f5ca-280e-46ca-86de-23ccbf6df9fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:43:49 np0005485008 nova_compute[192512]: 2025-10-13 15:43:49.950 2 DEBUG oslo_concurrency.lockutils [req-b6f05e39-dd41-4d3a-a94d-62541e453537 req-bdf9f5ca-280e-46ca-86de-23ccbf6df9fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:43:49 np0005485008 nova_compute[192512]: 2025-10-13 15:43:49.951 2 DEBUG nova.compute.manager [req-b6f05e39-dd41-4d3a-a94d-62541e453537 req-bdf9f5ca-280e-46ca-86de-23ccbf6df9fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] No waiting events found dispatching network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:43:49 np0005485008 nova_compute[192512]: 2025-10-13 15:43:49.951 2 WARNING nova.compute.manager [req-b6f05e39-dd41-4d3a-a94d-62541e453537 req-bdf9f5ca-280e-46ca-86de-23ccbf6df9fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received unexpected event network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c for instance with vm_state active and task_state None.#033[00m
Oct 13 11:43:50 np0005485008 nova_compute[192512]: 2025-10-13 15:43:50.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:51 np0005485008 nova_compute[192512]: 2025-10-13 15:43:51.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:53 np0005485008 podman[215306]: 2025-10-13 15:43:53.785933616 +0000 UTC m=+0.065699064 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:43:53 np0005485008 podman[215304]: 2025-10-13 15:43:53.789077784 +0000 UTC m=+0.078230833 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:43:53 np0005485008 podman[215305]: 2025-10-13 15:43:53.806384049 +0000 UTC m=+0.095071784 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 11:43:53 np0005485008 podman[215307]: 2025-10-13 15:43:53.828664429 +0000 UTC m=+0.102031719 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251009)
Oct 13 11:43:55 np0005485008 nova_compute[192512]: 2025-10-13 15:43:55.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:55 np0005485008 podman[215390]: 2025-10-13 15:43:55.755742169 +0000 UTC m=+0.058043218 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 11:43:56 np0005485008 nova_compute[192512]: 2025-10-13 15:43:56.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:43:59 np0005485008 nova_compute[192512]: 2025-10-13 15:43:59.392 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:00 np0005485008 nova_compute[192512]: 2025-10-13 15:44:00.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:00 np0005485008 nova_compute[192512]: 2025-10-13 15:44:00.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:44:00 np0005485008 nova_compute[192512]: 2025-10-13 15:44:00.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:00 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:00Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:e3:5f 10.100.0.14
Oct 13 11:44:00 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:00Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:e3:5f 10.100.0.14
Oct 13 11:44:01 np0005485008 nova_compute[192512]: 2025-10-13 15:44:01.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:02 np0005485008 nova_compute[192512]: 2025-10-13 15:44:02.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:02 np0005485008 nova_compute[192512]: 2025-10-13 15:44:02.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:03 np0005485008 nova_compute[192512]: 2025-10-13 15:44:03.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:03 np0005485008 nova_compute[192512]: 2025-10-13 15:44:03.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:04 np0005485008 nova_compute[192512]: 2025-10-13 15:44:04.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:04 np0005485008 nova_compute[192512]: 2025-10-13 15:44:04.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:44:04 np0005485008 nova_compute[192512]: 2025-10-13 15:44:04.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:44:04 np0005485008 nova_compute[192512]: 2025-10-13 15:44:04.633 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:44:04 np0005485008 nova_compute[192512]: 2025-10-13 15:44:04.634 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:44:04 np0005485008 nova_compute[192512]: 2025-10-13 15:44:04.634 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:44:04 np0005485008 nova_compute[192512]: 2025-10-13 15:44:04.635 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:05 np0005485008 nova_compute[192512]: 2025-10-13 15:44:05.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:05 np0005485008 podman[202884]: time="2025-10-13T15:44:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:44:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:44:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:44:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:44:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3455 "" "Go-http-client/1.1"
Oct 13 11:44:06 np0005485008 nova_compute[192512]: 2025-10-13 15:44:06.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.299 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updating instance_info_cache with network_info: [{"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.384 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.385 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.385 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.386 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:08 np0005485008 podman[215424]: 2025-10-13 15:44:08.395734353 +0000 UTC m=+0.063829427 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.442 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.443 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.443 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.443 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.569 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.630 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.631 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.691 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.696 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.775 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.776 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.835 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.964 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.965 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5583MB free_disk=73.41207122802734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.965 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:08 np0005485008 nova_compute[192512]: 2025-10-13 15:44:08.966 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.104 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.105 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 7144b3d2-d00d-489a-81a2-11dd796fb608 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.105 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.105 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.181 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.263 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.335 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:44:09 np0005485008 nova_compute[192512]: 2025-10-13 15:44:09.335 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:10 np0005485008 nova_compute[192512]: 2025-10-13 15:44:10.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:11 np0005485008 nova_compute[192512]: 2025-10-13 15:44:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:15 np0005485008 nova_compute[192512]: 2025-10-13 15:44:15.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:16 np0005485008 nova_compute[192512]: 2025-10-13 15:44:16.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:17 np0005485008 nova_compute[192512]: 2025-10-13 15:44:17.236 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:17 np0005485008 nova_compute[192512]: 2025-10-13 15:44:17.236 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:17 np0005485008 nova_compute[192512]: 2025-10-13 15:44:17.363 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:44:17 np0005485008 nova_compute[192512]: 2025-10-13 15:44:17.907 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:17 np0005485008 nova_compute[192512]: 2025-10-13 15:44:17.908 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:17 np0005485008 nova_compute[192512]: 2025-10-13 15:44:17.918 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:44:17 np0005485008 nova_compute[192512]: 2025-10-13 15:44:17.918 2 INFO nova.compute.claims [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.230 2 DEBUG nova.compute.provider_tree [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.268 2 DEBUG nova.scheduler.client.report [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.324 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.325 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.496 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.496 2 DEBUG nova.network.neutron [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.545 2 INFO nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.666 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.847 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.848 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.848 2 INFO nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Creating image(s)#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.849 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "/var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.849 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "/var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.850 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "/var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.861 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.924 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.926 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.929 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:18 np0005485008 nova_compute[192512]: 2025-10-13 15:44:18.953 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.023 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.025 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.057 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk 1073741824" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.059 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.060 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.129 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.130 2 DEBUG nova.virt.disk.api [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Checking if we can resize image /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.131 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.191 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.193 2 DEBUG nova.virt.disk.api [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Cannot resize image /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.193 2 DEBUG nova.objects.instance [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'migration_context' on Instance uuid 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.334 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.335 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Ensure instance console log exists: /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.335 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.335 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:19 np0005485008 nova_compute[192512]: 2025-10-13 15:44:19.336 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:44:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:44:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:44:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:44:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:44:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:20.581 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:44:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:20.582 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:44:20 np0005485008 nova_compute[192512]: 2025-10-13 15:44:20.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:21 np0005485008 nova_compute[192512]: 2025-10-13 15:44:21.081 2 DEBUG nova.network.neutron [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Successfully created port: 50a7b2c6-0bc6-4214-b676-2fd1cce8198f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 11:44:21 np0005485008 nova_compute[192512]: 2025-10-13 15:44:21.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:24 np0005485008 podman[215487]: 2025-10-13 15:44:24.805674561 +0000 UTC m=+0.086062404 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 11:44:24 np0005485008 podman[215489]: 2025-10-13 15:44:24.813770693 +0000 UTC m=+0.078472839 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 11:44:24 np0005485008 podman[215488]: 2025-10-13 15:44:24.843582979 +0000 UTC m=+0.115443477 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:44:24 np0005485008 podman[215490]: 2025-10-13 15:44:24.844878729 +0000 UTC m=+0.105596661 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.188 2 DEBUG nova.network.neutron [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Successfully updated port: 50a7b2c6-0bc6-4214-b676-2fd1cce8198f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.208 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "refresh_cache-3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.208 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquired lock "refresh_cache-3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.209 2 DEBUG nova.network.neutron [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.316 2 DEBUG nova.compute.manager [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-changed-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.316 2 DEBUG nova.compute.manager [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Refreshing instance network info cache due to event network-changed-50a7b2c6-0bc6-4214-b676-2fd1cce8198f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.317 2 DEBUG oslo_concurrency.lockutils [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:44:25 np0005485008 nova_compute[192512]: 2025-10-13 15:44:25.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:26 np0005485008 nova_compute[192512]: 2025-10-13 15:44:26.098 2 DEBUG nova.network.neutron [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 11:44:26 np0005485008 podman[215567]: 2025-10-13 15:44:26.763540371 +0000 UTC m=+0.066287859 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 13 11:44:26 np0005485008 nova_compute[192512]: 2025-10-13 15:44:26.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:27.584 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.335 2 DEBUG nova.network.neutron [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Updating instance_info_cache with network_info: [{"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.367 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Releasing lock "refresh_cache-3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.368 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Instance network_info: |[{"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.368 2 DEBUG oslo_concurrency.lockutils [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.369 2 DEBUG nova.network.neutron [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Refreshing network info cache for port 50a7b2c6-0bc6-4214-b676-2fd1cce8198f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.372 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Start _get_guest_xml network_info=[{"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.377 2 WARNING nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.399 2 DEBUG nova.virt.libvirt.host [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.400 2 DEBUG nova.virt.libvirt.host [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.403 2 DEBUG nova.virt.libvirt.host [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.404 2 DEBUG nova.virt.libvirt.host [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.405 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.405 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.405 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.406 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.406 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.406 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.406 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.406 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.407 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.407 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.407 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.407 2 DEBUG nova.virt.hardware [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.410 2 DEBUG nova.virt.libvirt.vif [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-808761554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-808761554',id=6,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-zrk4mi2i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-pro
ject-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:44:18Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=3bef78b8-c7d4-43d6-a28f-39a5b4b4250a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.411 2 DEBUG nova.network.os_vif_util [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.411 2 DEBUG nova.network.os_vif_util [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:d5:b5,bridge_name='br-int',has_traffic_filtering=True,id=50a7b2c6-0bc6-4214-b676-2fd1cce8198f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a7b2c6-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.412 2 DEBUG nova.objects.instance [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.443 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <uuid>3bef78b8-c7d4-43d6-a28f-39a5b4b4250a</uuid>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <name>instance-00000006</name>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteActionsViaActuator-server-808761554</nova:name>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:44:28</nova:creationTime>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:user uuid="4732dfe3d815487f863c441d326f4231">tempest-TestExecuteActionsViaActuator-836873667-project-admin</nova:user>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:project uuid="de23aa1f8b1f466e8bfa712e3140ce54">tempest-TestExecuteActionsViaActuator-836873667</nova:project>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        <nova:port uuid="50a7b2c6-0bc6-4214-b676-2fd1cce8198f">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <entry name="serial">3bef78b8-c7d4-43d6-a28f-39a5b4b4250a</entry>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <entry name="uuid">3bef78b8-c7d4-43d6-a28f-39a5b4b4250a</entry>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk.config"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:d3:d5:b5"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <target dev="tap50a7b2c6-0b"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/console.log" append="off"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:44:28 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:44:28 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:44:28 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:44:28 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.445 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Preparing to wait for external event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.445 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.445 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.445 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.446 2 DEBUG nova.virt.libvirt.vif [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-808761554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-808761554',id=6,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-zrk4mi2i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836
873667-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:44:18Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=3bef78b8-c7d4-43d6-a28f-39a5b4b4250a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.446 2 DEBUG nova.network.os_vif_util [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.447 2 DEBUG nova.network.os_vif_util [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:d5:b5,bridge_name='br-int',has_traffic_filtering=True,id=50a7b2c6-0bc6-4214-b676-2fd1cce8198f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a7b2c6-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.447 2 DEBUG os_vif [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:d5:b5,bridge_name='br-int',has_traffic_filtering=True,id=50a7b2c6-0bc6-4214-b676-2fd1cce8198f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a7b2c6-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50a7b2c6-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50a7b2c6-0b, col_values=(('external_ids', {'iface-id': '50a7b2c6-0bc6-4214-b676-2fd1cce8198f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:d5:b5', 'vm-uuid': '3bef78b8-c7d4-43d6-a28f-39a5b4b4250a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:28 np0005485008 NetworkManager[51587]: <info>  [1760370268.4556] manager: (tap50a7b2c6-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.463 2 INFO os_vif [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:d5:b5,bridge_name='br-int',has_traffic_filtering=True,id=50a7b2c6-0bc6-4214-b676-2fd1cce8198f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a7b2c6-0b')#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.523 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.524 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.524 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] No VIF found with MAC fa:16:3e:d3:d5:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 11:44:28 np0005485008 nova_compute[192512]: 2025-10-13 15:44:28.525 2 INFO nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Using config drive#033[00m
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.334 2 INFO nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Creating config drive at /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk.config#033[00m
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.344 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppvlopcrp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.490 2 DEBUG oslo_concurrency.processutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppvlopcrp" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:29 np0005485008 kernel: tap50a7b2c6-0b: entered promiscuous mode
Oct 13 11:44:29 np0005485008 NetworkManager[51587]: <info>  [1760370269.5643] manager: (tap50a7b2c6-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/29)
Oct 13 11:44:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:29Z|00037|binding|INFO|Claiming lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f for this chassis.
Oct 13 11:44:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:29Z|00038|binding|INFO|50a7b2c6-0bc6-4214-b676-2fd1cce8198f: Claiming fa:16:3e:d3:d5:b5 10.100.0.13
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.583 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:d5:b5 10.100.0.13'], port_security=['fa:16:3e:d3:d5:b5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bef78b8-c7d4-43d6-a28f-39a5b4b4250a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50a7b2c6-0bc6-4214-b676-2fd1cce8198f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.585 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50a7b2c6-0bc6-4214-b676-2fd1cce8198f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis#033[00m
Oct 13 11:44:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:29Z|00039|binding|INFO|Setting lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f ovn-installed in OVS
Oct 13 11:44:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:29Z|00040|binding|INFO|Setting lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f up in Southbound
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.590 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:44:29 np0005485008 systemd-machined[152551]: New machine qemu-3-instance-00000006.
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.614 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a89a1be7-cdb0-41de-ad07-690837eba222]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:44:29 np0005485008 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Oct 13 11:44:29 np0005485008 systemd-udevd[215610]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.653 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c90d57f7-038b-4ca3-8cc1-ae4391cae95d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.657 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[1333a820-b739-4683-ab9f-50117c763e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:44:29 np0005485008 NetworkManager[51587]: <info>  [1760370269.6598] device (tap50a7b2c6-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:44:29 np0005485008 NetworkManager[51587]: <info>  [1760370269.6605] device (tap50a7b2c6-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.690 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d8271eda-a8d4-4a37-9cf3-47cb2d4e33c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.716 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4abedc09-b92a-4d87-81e1-ef561b43848f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 8, 'rx_bytes': 1000, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 215620, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.734 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[59010ec2-2fdd-42f4-be56-0be7d7d6564d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215622, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215622, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.736 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:29 np0005485008 nova_compute[192512]: 2025-10-13 15:44:29.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.741 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.741 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.742 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:29.742 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.337 2 DEBUG nova.compute.manager [req-31f93b18-bafc-4572-8a1c-94d4d55fb09e req-04f45269-9714-4ff4-90d4-d01936594be7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.337 2 DEBUG oslo_concurrency.lockutils [req-31f93b18-bafc-4572-8a1c-94d4d55fb09e req-04f45269-9714-4ff4-90d4-d01936594be7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.338 2 DEBUG oslo_concurrency.lockutils [req-31f93b18-bafc-4572-8a1c-94d4d55fb09e req-04f45269-9714-4ff4-90d4-d01936594be7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.338 2 DEBUG oslo_concurrency.lockutils [req-31f93b18-bafc-4572-8a1c-94d4d55fb09e req-04f45269-9714-4ff4-90d4-d01936594be7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.338 2 DEBUG nova.compute.manager [req-31f93b18-bafc-4572-8a1c-94d4d55fb09e req-04f45269-9714-4ff4-90d4-d01936594be7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Processing event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.926 2 DEBUG nova.network.neutron [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Updated VIF entry in instance network info cache for port 50a7b2c6-0bc6-4214-b676-2fd1cce8198f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.927 2 DEBUG nova.network.neutron [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Updating instance_info_cache with network_info: [{"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.937 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370270.93687, 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.937 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] VM Started (Lifecycle Event)#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.940 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.944 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.950 2 INFO nova.virt.libvirt.driver [-] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Instance spawned successfully.#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.950 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.963 2 DEBUG oslo_concurrency.lockutils [req-a56a99d1-345c-4847-922f-12ffdfe7da18 req-92938e97-2e6a-46ed-ba5a-516de7adcec6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.983 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:44:30 np0005485008 nova_compute[192512]: 2025-10-13 15:44:30.990 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.000 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.001 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.001 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.002 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.003 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.004 2 DEBUG nova.virt.libvirt.driver [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.020 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.021 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370270.9376552, 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.022 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] VM Paused (Lifecycle Event)#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.045 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.050 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370270.9425082, 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.050 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.093 2 INFO nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Took 12.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.094 2 DEBUG nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.096 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.106 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.194 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.207 2 INFO nova.compute.manager [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Took 13.34 seconds to build instance.#033[00m
Oct 13 11:44:31 np0005485008 nova_compute[192512]: 2025-10-13 15:44:31.233 2 DEBUG oslo_concurrency.lockutils [None req-e33b20cc-6844-4661-9137-5a078d415e8f 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:33 np0005485008 nova_compute[192512]: 2025-10-13 15:44:33.107 2 DEBUG nova.compute.manager [req-6f350bef-cc6f-4c71-a30b-eee4784ef6f9 req-a7564c9e-16c0-4a20-9d70-8f6003553d31 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:44:33 np0005485008 nova_compute[192512]: 2025-10-13 15:44:33.107 2 DEBUG oslo_concurrency.lockutils [req-6f350bef-cc6f-4c71-a30b-eee4784ef6f9 req-a7564c9e-16c0-4a20-9d70-8f6003553d31 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:33 np0005485008 nova_compute[192512]: 2025-10-13 15:44:33.108 2 DEBUG oslo_concurrency.lockutils [req-6f350bef-cc6f-4c71-a30b-eee4784ef6f9 req-a7564c9e-16c0-4a20-9d70-8f6003553d31 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:33 np0005485008 nova_compute[192512]: 2025-10-13 15:44:33.108 2 DEBUG oslo_concurrency.lockutils [req-6f350bef-cc6f-4c71-a30b-eee4784ef6f9 req-a7564c9e-16c0-4a20-9d70-8f6003553d31 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:33 np0005485008 nova_compute[192512]: 2025-10-13 15:44:33.109 2 DEBUG nova.compute.manager [req-6f350bef-cc6f-4c71-a30b-eee4784ef6f9 req-a7564c9e-16c0-4a20-9d70-8f6003553d31 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] No waiting events found dispatching network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:44:33 np0005485008 nova_compute[192512]: 2025-10-13 15:44:33.109 2 WARNING nova.compute.manager [req-6f350bef-cc6f-4c71-a30b-eee4784ef6f9 req-a7564c9e-16c0-4a20-9d70-8f6003553d31 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received unexpected event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f for instance with vm_state active and task_state None.#033[00m
Oct 13 11:44:33 np0005485008 nova_compute[192512]: 2025-10-13 15:44:33.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:33.947 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:33.948 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:44:33.948 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:35 np0005485008 podman[202884]: time="2025-10-13T15:44:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:44:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:44:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:44:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:44:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3458 "" "Go-http-client/1.1"
Oct 13 11:44:35 np0005485008 nova_compute[192512]: 2025-10-13 15:44:35.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:38 np0005485008 nova_compute[192512]: 2025-10-13 15:44:38.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:38 np0005485008 podman[215630]: 2025-10-13 15:44:38.772931501 +0000 UTC m=+0.061448829 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Oct 13 11:44:40 np0005485008 nova_compute[192512]: 2025-10-13 15:44:40.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:43 np0005485008 nova_compute[192512]: 2025-10-13 15:44:43.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:43 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:43Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:d5:b5 10.100.0.13
Oct 13 11:44:43 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:43Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:d5:b5 10.100.0.13
Oct 13 11:44:45 np0005485008 nova_compute[192512]: 2025-10-13 15:44:45.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:48 np0005485008 nova_compute[192512]: 2025-10-13 15:44:48.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:48 np0005485008 nova_compute[192512]: 2025-10-13 15:44:48.854 2 DEBUG nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Creating tmpfile /var/lib/nova/instances/tmp06fbsf_o to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 11:44:48 np0005485008 nova_compute[192512]: 2025-10-13 15:44:48.986 2 DEBUG nova.compute.manager [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.051 2 DEBUG nova.compute.manager [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=72704,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp06fbsf_o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.079 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.080 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.122 2 INFO nova.compute.rpcapi [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.123 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.212 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.213 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.340 2 DEBUG nova.objects.instance [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'pci_requests' on Instance uuid 525c99ab-3297-47fa-996f-96840e6855a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:44:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:44:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:44:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:44:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:44:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.622 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.622 2 INFO nova.compute.claims [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.623 2 DEBUG nova.objects.instance [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'resources' on Instance uuid 525c99ab-3297-47fa-996f-96840e6855a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.648 2 DEBUG nova.objects.instance [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 525c99ab-3297-47fa-996f-96840e6855a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.709 2 DEBUG nova.objects.instance [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 525c99ab-3297-47fa-996f-96840e6855a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.814 2 INFO nova.compute.resource_tracker [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Updating resource usage from migration 9fdfca36-1ac1-4433-9998-53818a892917#033[00m
Oct 13 11:44:49 np0005485008 nova_compute[192512]: 2025-10-13 15:44:49.815 2 DEBUG nova.compute.resource_tracker [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Starting to track incoming migration 9fdfca36-1ac1-4433-9998-53818a892917 with flavor ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct 13 11:44:50 np0005485008 nova_compute[192512]: 2025-10-13 15:44:50.070 2 DEBUG nova.compute.provider_tree [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:44:50 np0005485008 nova_compute[192512]: 2025-10-13 15:44:50.172 2 DEBUG nova.scheduler.client.report [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:44:50 np0005485008 nova_compute[192512]: 2025-10-13 15:44:50.206 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:50 np0005485008 nova_compute[192512]: 2025-10-13 15:44:50.206 2 INFO nova.compute.manager [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Migrating#033[00m
Oct 13 11:44:50 np0005485008 nova_compute[192512]: 2025-10-13 15:44:50.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:51 np0005485008 nova_compute[192512]: 2025-10-13 15:44:51.005 2 DEBUG nova.compute.manager [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp06fbsf_o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d31c495b-2e0f-4da1-bc80-cf4628fd772e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 11:44:51 np0005485008 nova_compute[192512]: 2025-10-13 15:44:51.064 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-d31c495b-2e0f-4da1-bc80-cf4628fd772e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:44:51 np0005485008 nova_compute[192512]: 2025-10-13 15:44:51.065 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-d31c495b-2e0f-4da1-bc80-cf4628fd772e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:44:51 np0005485008 nova_compute[192512]: 2025-10-13 15:44:51.065 2 DEBUG nova.network.neutron [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:44:52 np0005485008 systemd[1]: Created slice User Slice of UID 42436.
Oct 13 11:44:52 np0005485008 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 13 11:44:52 np0005485008 systemd-logind[784]: New session 30 of user nova.
Oct 13 11:44:52 np0005485008 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 13 11:44:52 np0005485008 systemd[1]: Starting User Manager for UID 42436...
Oct 13 11:44:52 np0005485008 systemd[215673]: Queued start job for default target Main User Target.
Oct 13 11:44:52 np0005485008 systemd[215673]: Created slice User Application Slice.
Oct 13 11:44:52 np0005485008 systemd[215673]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 11:44:52 np0005485008 systemd[215673]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 11:44:52 np0005485008 systemd[215673]: Reached target Paths.
Oct 13 11:44:52 np0005485008 systemd[215673]: Reached target Timers.
Oct 13 11:44:52 np0005485008 systemd[215673]: Starting D-Bus User Message Bus Socket...
Oct 13 11:44:52 np0005485008 systemd[215673]: Starting Create User's Volatile Files and Directories...
Oct 13 11:44:52 np0005485008 systemd[215673]: Finished Create User's Volatile Files and Directories.
Oct 13 11:44:52 np0005485008 systemd[215673]: Listening on D-Bus User Message Bus Socket.
Oct 13 11:44:52 np0005485008 systemd[215673]: Reached target Sockets.
Oct 13 11:44:52 np0005485008 systemd[215673]: Reached target Basic System.
Oct 13 11:44:52 np0005485008 systemd[1]: Started User Manager for UID 42436.
Oct 13 11:44:52 np0005485008 systemd[215673]: Reached target Main User Target.
Oct 13 11:44:52 np0005485008 systemd[215673]: Startup finished in 161ms.
Oct 13 11:44:52 np0005485008 systemd[1]: Started Session 30 of User nova.
Oct 13 11:44:53 np0005485008 systemd[1]: session-30.scope: Deactivated successfully.
Oct 13 11:44:53 np0005485008 systemd-logind[784]: Session 30 logged out. Waiting for processes to exit.
Oct 13 11:44:53 np0005485008 systemd-logind[784]: Removed session 30.
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.141 2 DEBUG nova.network.neutron [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Updating instance_info_cache with network_info: [{"id": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "address": "fa:16:3e:15:93:1a", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a8a863-ae", "ovs_interfaceid": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:44:53 np0005485008 systemd-logind[784]: New session 32 of user nova.
Oct 13 11:44:53 np0005485008 systemd[1]: Started Session 32 of User nova.
Oct 13 11:44:53 np0005485008 systemd[1]: session-32.scope: Deactivated successfully.
Oct 13 11:44:53 np0005485008 systemd-logind[784]: Session 32 logged out. Waiting for processes to exit.
Oct 13 11:44:53 np0005485008 systemd-logind[784]: Removed session 32.
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.259 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-d31c495b-2e0f-4da1-bc80-cf4628fd772e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.261 2 DEBUG nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp06fbsf_o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d31c495b-2e0f-4da1-bc80-cf4628fd772e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.261 2 DEBUG nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Creating instance directory: /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.261 2 DEBUG nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Creating disk.info with the contents: {'/var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk': 'qcow2', '/var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.262 2 DEBUG nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.263 2 DEBUG nova.objects.instance [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d31c495b-2e0f-4da1-bc80-cf4628fd772e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.343 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.428 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.429 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.430 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.445 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.535 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.536 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.579 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.580 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.580 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.648 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.650 2 DEBUG nova.virt.disk.api [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.650 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.710 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.712 2 DEBUG nova.virt.disk.api [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.712 2 DEBUG nova.objects.instance [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid d31c495b-2e0f-4da1-bc80-cf4628fd772e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.842 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.866 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk.config 485376" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.869 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk.config to /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 11:44:53 np0005485008 nova_compute[192512]: 2025-10-13 15:44:53.869 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk.config /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.460 2 DEBUG oslo_concurrency.processutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk.config /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.460 2 DEBUG nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.462 2 DEBUG nova.virt.libvirt.vif [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:43:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-911356362',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-911356362',id=3,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-cb2dlqyx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:43:35Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=d31c495b-2e0f-4da1-bc80-cf4628fd772e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "address": "fa:16:3e:15:93:1a", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap74a8a863-ae", "ovs_interfaceid": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.462 2 DEBUG nova.network.os_vif_util [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "address": "fa:16:3e:15:93:1a", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap74a8a863-ae", "ovs_interfaceid": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.463 2 DEBUG nova.network.os_vif_util [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a8a863-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.463 2 DEBUG os_vif [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a8a863-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.465 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74a8a863-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap74a8a863-ae, col_values=(('external_ids', {'iface-id': '74a8a863-ae9d-45d4-ab72-b3d0e08d02f2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:93:1a', 'vm-uuid': 'd31c495b-2e0f-4da1-bc80-cf4628fd772e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:54 np0005485008 NetworkManager[51587]: <info>  [1760370294.4733] manager: (tap74a8a863-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.481 2 INFO os_vif [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a8a863-ae')#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.482 2 DEBUG nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 11:44:54 np0005485008 nova_compute[192512]: 2025-10-13 15:44:54.482 2 DEBUG nova.compute.manager [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp06fbsf_o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d31c495b-2e0f-4da1-bc80-cf4628fd772e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 11:44:55 np0005485008 nova_compute[192512]: 2025-10-13 15:44:55.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:55 np0005485008 podman[215718]: 2025-10-13 15:44:55.800096501 +0000 UTC m=+0.085219017 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:44:55 np0005485008 podman[215719]: 2025-10-13 15:44:55.809228065 +0000 UTC m=+0.076818307 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 11:44:55 np0005485008 podman[215717]: 2025-10-13 15:44:55.836995168 +0000 UTC m=+0.121897148 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:44:55 np0005485008 podman[215720]: 2025-10-13 15:44:55.838451442 +0000 UTC m=+0.106855979 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 11:44:55 np0005485008 nova_compute[192512]: 2025-10-13 15:44:55.979 2 DEBUG nova.network.neutron [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Port 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 11:44:55 np0005485008 nova_compute[192512]: 2025-10-13 15:44:55.981 2 DEBUG nova.compute.manager [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=72704,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp06fbsf_o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d31c495b-2e0f-4da1-bc80-cf4628fd772e',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 11:44:56 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.137 2 DEBUG nova.compute.manager [req-983b81e0-19a9-4b9b-b540-6d49f4774fca req-1045854a-42db-46ae-a493-3dfe4bfc36eb 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-unplugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.138 2 DEBUG oslo_concurrency.lockutils [req-983b81e0-19a9-4b9b-b540-6d49f4774fca req-1045854a-42db-46ae-a493-3dfe4bfc36eb 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.139 2 DEBUG oslo_concurrency.lockutils [req-983b81e0-19a9-4b9b-b540-6d49f4774fca req-1045854a-42db-46ae-a493-3dfe4bfc36eb 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.139 2 DEBUG oslo_concurrency.lockutils [req-983b81e0-19a9-4b9b-b540-6d49f4774fca req-1045854a-42db-46ae-a493-3dfe4bfc36eb 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.139 2 DEBUG nova.compute.manager [req-983b81e0-19a9-4b9b-b540-6d49f4774fca req-1045854a-42db-46ae-a493-3dfe4bfc36eb 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] No waiting events found dispatching network-vif-unplugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.139 2 WARNING nova.compute.manager [req-983b81e0-19a9-4b9b-b540-6d49f4774fca req-1045854a-42db-46ae-a493-3dfe4bfc36eb 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received unexpected event network-vif-unplugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f for instance with vm_state active and task_state resize_migrating.#033[00m
Oct 13 11:44:56 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 11:44:56 np0005485008 NetworkManager[51587]: <info>  [1760370296.2978] manager: (tap74a8a863-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Oct 13 11:44:56 np0005485008 kernel: tap74a8a863-ae: entered promiscuous mode
Oct 13 11:44:56 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:56Z|00041|binding|INFO|Claiming lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 for this additional chassis.
Oct 13 11:44:56 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:56Z|00042|binding|INFO|74a8a863-ae9d-45d4-ab72-b3d0e08d02f2: Claiming fa:16:3e:15:93:1a 10.100.0.8
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:56 np0005485008 ovn_controller[94758]: 2025-10-13T15:44:56Z|00043|binding|INFO|Setting lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 ovn-installed in OVS
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:56 np0005485008 nova_compute[192512]: 2025-10-13 15:44:56.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:44:56 np0005485008 systemd-udevd[215832]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:44:56 np0005485008 systemd-machined[152551]: New machine qemu-4-instance-00000003.
Oct 13 11:44:56 np0005485008 NetworkManager[51587]: <info>  [1760370296.3640] device (tap74a8a863-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:44:56 np0005485008 NetworkManager[51587]: <info>  [1760370296.3650] device (tap74a8a863-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:44:56 np0005485008 systemd[1]: Started Virtual Machine qemu-4-instance-00000003.
Oct 13 11:44:56 np0005485008 systemd-logind[784]: New session 33 of user nova.
Oct 13 11:44:56 np0005485008 systemd[1]: Started Session 33 of User nova.
Oct 13 11:44:57 np0005485008 systemd[1]: session-33.scope: Deactivated successfully.
Oct 13 11:44:57 np0005485008 systemd-logind[784]: Session 33 logged out. Waiting for processes to exit.
Oct 13 11:44:57 np0005485008 systemd-logind[784]: Removed session 33.
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.142 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370297.1407006, d31c495b-2e0f-4da1-bc80-cf4628fd772e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.142 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] VM Started (Lifecycle Event)#033[00m
Oct 13 11:44:57 np0005485008 systemd-logind[784]: New session 34 of user nova.
Oct 13 11:44:57 np0005485008 podman[215854]: 2025-10-13 15:44:57.155872381 +0000 UTC m=+0.105525918 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 13 11:44:57 np0005485008 systemd[1]: Started Session 34 of User nova.
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.174 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:44:57 np0005485008 systemd[1]: session-34.scope: Deactivated successfully.
Oct 13 11:44:57 np0005485008 systemd-logind[784]: Session 34 logged out. Waiting for processes to exit.
Oct 13 11:44:57 np0005485008 systemd-logind[784]: Removed session 34.
Oct 13 11:44:57 np0005485008 systemd-logind[784]: New session 35 of user nova.
Oct 13 11:44:57 np0005485008 systemd[1]: Started Session 35 of User nova.
Oct 13 11:44:57 np0005485008 systemd[1]: session-35.scope: Deactivated successfully.
Oct 13 11:44:57 np0005485008 systemd-logind[784]: Session 35 logged out. Waiting for processes to exit.
Oct 13 11:44:57 np0005485008 systemd-logind[784]: Removed session 35.
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.825 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370297.8253794, d31c495b-2e0f-4da1-bc80-cf4628fd772e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.826 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.903 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.908 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:44:57 np0005485008 nova_compute[192512]: 2025-10-13 15:44:57.974 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 11:44:58 np0005485008 nova_compute[192512]: 2025-10-13 15:44:58.477 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.213 2 INFO nova.network.neutron [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Updating port c54b7ae5-4e02-4f53-aa67-fcd3b588177f with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.444 2 DEBUG nova.compute.manager [req-f4f09f06-93de-49f1-9e0a-148d7d139198 req-53e61bb3-9dbe-44da-9b4e-e821b35ae06a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.445 2 DEBUG oslo_concurrency.lockutils [req-f4f09f06-93de-49f1-9e0a-148d7d139198 req-53e61bb3-9dbe-44da-9b4e-e821b35ae06a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.446 2 DEBUG oslo_concurrency.lockutils [req-f4f09f06-93de-49f1-9e0a-148d7d139198 req-53e61bb3-9dbe-44da-9b4e-e821b35ae06a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.446 2 DEBUG oslo_concurrency.lockutils [req-f4f09f06-93de-49f1-9e0a-148d7d139198 req-53e61bb3-9dbe-44da-9b4e-e821b35ae06a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.446 2 DEBUG nova.compute.manager [req-f4f09f06-93de-49f1-9e0a-148d7d139198 req-53e61bb3-9dbe-44da-9b4e-e821b35ae06a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] No waiting events found dispatching network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.446 2 WARNING nova.compute.manager [req-f4f09f06-93de-49f1-9e0a-148d7d139198 req-53e61bb3-9dbe-44da-9b4e-e821b35ae06a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received unexpected event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f for instance with vm_state active and task_state resize_migrated.#033[00m
Oct 13 11:44:59 np0005485008 nova_compute[192512]: 2025-10-13 15:44:59.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:00 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:00Z|00044|binding|INFO|Claiming lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 for this chassis.
Oct 13 11:45:00 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:00Z|00045|binding|INFO|74a8a863-ae9d-45d4-ab72-b3d0e08d02f2: Claiming fa:16:3e:15:93:1a 10.100.0.8
Oct 13 11:45:00 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:00Z|00046|binding|INFO|Setting lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 up in Southbound
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.374 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:93:1a 10.100.0.8'], port_security=['fa:16:3e:15:93:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd31c495b-2e0f-4da1-bc80-cf4628fd772e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '11', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.376 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.377 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.392 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[34204034-e9a9-47ed-a497-b98fc0c1b97a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.416 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[304442a5-1111-48b4-8ff1-5cbd1d149c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.419 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b826f729-593a-4feb-8b51-143ef86137b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.448 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b4958b99-6f05-404a-99ed-c9318b18813a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.469 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[685196cf-dc5c-43bd-85f4-07770c8147f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 10, 'rx_bytes': 1294, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 10, 'rx_bytes': 1294, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 215902, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.489 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1959e21e-a520-49d6-9631-ea561f6cb4d4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215903, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215903, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.491 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:00 np0005485008 nova_compute[192512]: 2025-10-13 15:45:00.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:00 np0005485008 nova_compute[192512]: 2025-10-13 15:45:00.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.495 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.495 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.495 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:00.496 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:00 np0005485008 nova_compute[192512]: 2025-10-13 15:45:00.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:00 np0005485008 nova_compute[192512]: 2025-10-13 15:45:00.791 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-525c99ab-3297-47fa-996f-96840e6855a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:45:00 np0005485008 nova_compute[192512]: 2025-10-13 15:45:00.792 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-525c99ab-3297-47fa-996f-96840e6855a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:45:00 np0005485008 nova_compute[192512]: 2025-10-13 15:45:00.792 2 DEBUG nova.network.neutron [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:45:01 np0005485008 nova_compute[192512]: 2025-10-13 15:45:01.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:01 np0005485008 nova_compute[192512]: 2025-10-13 15:45:01.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:45:01 np0005485008 nova_compute[192512]: 2025-10-13 15:45:01.462 2 INFO nova.compute.manager [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Post operation of migration started#033[00m
Oct 13 11:45:02 np0005485008 nova_compute[192512]: 2025-10-13 15:45:02.189 2 DEBUG nova.compute.manager [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-changed-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:02 np0005485008 nova_compute[192512]: 2025-10-13 15:45:02.192 2 DEBUG nova.compute.manager [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Refreshing instance network info cache due to event network-changed-c54b7ae5-4e02-4f53-aa67-fcd3b588177f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:45:02 np0005485008 nova_compute[192512]: 2025-10-13 15:45:02.192 2 DEBUG oslo_concurrency.lockutils [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-525c99ab-3297-47fa-996f-96840e6855a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:45:03 np0005485008 nova_compute[192512]: 2025-10-13 15:45:03.147 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-d31c495b-2e0f-4da1-bc80-cf4628fd772e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:45:03 np0005485008 nova_compute[192512]: 2025-10-13 15:45:03.147 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-d31c495b-2e0f-4da1-bc80-cf4628fd772e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:45:03 np0005485008 nova_compute[192512]: 2025-10-13 15:45:03.148 2 DEBUG nova.network.neutron [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:45:03 np0005485008 nova_compute[192512]: 2025-10-13 15:45:03.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:03 np0005485008 nova_compute[192512]: 2025-10-13 15:45:03.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.148 2 DEBUG nova.network.neutron [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Updating instance_info_cache with network_info: [{"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.191 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-525c99ab-3297-47fa-996f-96840e6855a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.196 2 DEBUG oslo_concurrency.lockutils [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-525c99ab-3297-47fa-996f-96840e6855a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.196 2 DEBUG nova.network.neutron [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Refreshing network info cache for port c54b7ae5-4e02-4f53-aa67-fcd3b588177f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.426 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.471 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.474 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.474 2 INFO nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Creating image(s)#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.475 2 DEBUG nova.objects.instance [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 525c99ab-3297-47fa-996f-96840e6855a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.514 2 DEBUG oslo_concurrency.processutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.592 2 DEBUG oslo_concurrency.processutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.594 2 DEBUG nova.virt.disk.api [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.595 2 DEBUG oslo_concurrency.processutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.653 2 DEBUG oslo_concurrency.processutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.655 2 DEBUG nova.virt.disk.api [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.676 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.677 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Ensure instance console log exists: /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.678 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.678 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.678 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.682 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Start _get_guest_xml network_info=[{"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "vif_mac": "fa:16:3e:9c:c3:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.689 2 WARNING nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.696 2 DEBUG nova.virt.libvirt.host [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.697 2 DEBUG nova.virt.libvirt.host [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.701 2 DEBUG nova.virt.libvirt.host [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.701 2 DEBUG nova.virt.libvirt.host [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.702 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.702 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.703 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.703 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.704 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.704 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.704 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.704 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.705 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.705 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.705 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.705 2 DEBUG nova.virt.hardware [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.706 2 DEBUG nova.objects.instance [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 525c99ab-3297-47fa-996f-96840e6855a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.738 2 DEBUG oslo_concurrency.processutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.799 2 DEBUG oslo_concurrency.processutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk.config --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.801 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "/var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.802 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "/var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.804 2 DEBUG oslo_concurrency.lockutils [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "/var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.807 2 DEBUG nova.virt.libvirt.vif [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1341624973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1341624973',id=5,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:44:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-6ayf6mjs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:44:58Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=525c99ab-3297-47fa-996f-96840e6855a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "vif_mac": "fa:16:3e:9c:c3:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.808 2 DEBUG nova.network.os_vif_util [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "vif_mac": "fa:16:3e:9c:c3:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.810 2 DEBUG nova.network.os_vif_util [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:c3:97,bridge_name='br-int',has_traffic_filtering=True,id=c54b7ae5-4e02-4f53-aa67-fcd3b588177f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc54b7ae5-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.815 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <uuid>525c99ab-3297-47fa-996f-96840e6855a5</uuid>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <name>instance-00000005</name>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteActionsViaActuator-server-1341624973</nova:name>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:45:04</nova:creationTime>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:user uuid="4732dfe3d815487f863c441d326f4231">tempest-TestExecuteActionsViaActuator-836873667-project-admin</nova:user>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:project uuid="de23aa1f8b1f466e8bfa712e3140ce54">tempest-TestExecuteActionsViaActuator-836873667</nova:project>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        <nova:port uuid="c54b7ae5-4e02-4f53-aa67-fcd3b588177f">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <entry name="serial">525c99ab-3297-47fa-996f-96840e6855a5</entry>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <entry name="uuid">525c99ab-3297-47fa-996f-96840e6855a5</entry>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk.config"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:9c:c3:97"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <target dev="tapc54b7ae5-4e"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/console.log" append="off"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:45:04 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:45:04 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:45:04 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:45:04 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.817 2 DEBUG nova.virt.libvirt.vif [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1341624973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1341624973',id=5,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:44:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-6ayf6mjs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:44:58Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=525c99ab-3297-47fa-996f-96840e6855a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "vif_mac": "fa:16:3e:9c:c3:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.818 2 DEBUG nova.network.os_vif_util [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "vif_mac": "fa:16:3e:9c:c3:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.819 2 DEBUG nova.network.os_vif_util [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:c3:97,bridge_name='br-int',has_traffic_filtering=True,id=c54b7ae5-4e02-4f53-aa67-fcd3b588177f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc54b7ae5-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.819 2 DEBUG os_vif [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:c3:97,bridge_name='br-int',has_traffic_filtering=True,id=c54b7ae5-4e02-4f53-aa67-fcd3b588177f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc54b7ae5-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc54b7ae5-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.828 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc54b7ae5-4e, col_values=(('external_ids', {'iface-id': 'c54b7ae5-4e02-4f53-aa67-fcd3b588177f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:c3:97', 'vm-uuid': '525c99ab-3297-47fa-996f-96840e6855a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:04 np0005485008 NetworkManager[51587]: <info>  [1760370304.8325] manager: (tapc54b7ae5-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.841 2 INFO os_vif [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:c3:97,bridge_name='br-int',has_traffic_filtering=True,id=c54b7ae5-4e02-4f53-aa67-fcd3b588177f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc54b7ae5-4e')#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.918 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.919 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.919 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No VIF found with MAC fa:16:3e:9c:c3:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.920 2 INFO nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Using config drive#033[00m
Oct 13 11:45:04 np0005485008 kernel: tapc54b7ae5-4e: entered promiscuous mode
Oct 13 11:45:04 np0005485008 NetworkManager[51587]: <info>  [1760370304.9915] manager: (tapc54b7ae5-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Oct 13 11:45:04 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:04Z|00047|binding|INFO|Claiming lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f for this chassis.
Oct 13 11:45:04 np0005485008 nova_compute[192512]: 2025-10-13 15:45:04.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:04 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:04Z|00048|binding|INFO|c54b7ae5-4e02-4f53-aa67-fcd3b588177f: Claiming fa:16:3e:9c:c3:97 10.100.0.7
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.008 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:c3:97 10.100.0.7'], port_security=['fa:16:3e:9c:c3:97 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '525c99ab-3297-47fa-996f-96840e6855a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=c54b7ae5-4e02-4f53-aa67-fcd3b588177f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.011 103642 INFO neutron.agent.ovn.metadata.agent [-] Port c54b7ae5-4e02-4f53-aa67-fcd3b588177f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.014 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:05 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:05Z|00049|binding|INFO|Setting lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f ovn-installed in OVS
Oct 13 11:45:05 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:05Z|00050|binding|INFO|Setting lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f up in Southbound
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:05 np0005485008 systemd-udevd[215930]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.035 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[93bce959-9623-413e-b7ca-d5e9ff5275fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:05 np0005485008 systemd-machined[152551]: New machine qemu-5-instance-00000005.
Oct 13 11:45:05 np0005485008 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct 13 11:45:05 np0005485008 NetworkManager[51587]: <info>  [1760370305.0585] device (tapc54b7ae5-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:45:05 np0005485008 NetworkManager[51587]: <info>  [1760370305.0593] device (tapc54b7ae5-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.083 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[a75c5cc4-d7db-4a32-8fe8-204087fc6c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.088 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[8c00a02b-18ba-4e7a-aea6-73610206919f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.122 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9e91cd-def9-49e8-8544-8b74d1fa2a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.146 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4caeea88-ee2d-4f47-b919-a4d5ebc4d430]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 12, 'rx_bytes': 1294, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 12, 'rx_bytes': 1294, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 215943, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.162 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.163 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.163 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.167 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[146039d5-d51a-482d-a69f-8ffe33d731b7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215945, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 215945, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.169 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.172 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.172 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.173 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:05.173 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:05 np0005485008 podman[202884]: time="2025-10-13T15:45:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:45:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:45:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.652 2 DEBUG nova.compute.manager [req-9e5b5749-79fe-4298-80f6-c269025a0215 req-969b36fe-0dac-4405-9256-4f0b8d1453a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.653 2 DEBUG oslo_concurrency.lockutils [req-9e5b5749-79fe-4298-80f6-c269025a0215 req-969b36fe-0dac-4405-9256-4f0b8d1453a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.653 2 DEBUG oslo_concurrency.lockutils [req-9e5b5749-79fe-4298-80f6-c269025a0215 req-969b36fe-0dac-4405-9256-4f0b8d1453a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.653 2 DEBUG oslo_concurrency.lockutils [req-9e5b5749-79fe-4298-80f6-c269025a0215 req-969b36fe-0dac-4405-9256-4f0b8d1453a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.654 2 DEBUG nova.compute.manager [req-9e5b5749-79fe-4298-80f6-c269025a0215 req-969b36fe-0dac-4405-9256-4f0b8d1453a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] No waiting events found dispatching network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.654 2 WARNING nova.compute.manager [req-9e5b5749-79fe-4298-80f6-c269025a0215 req-969b36fe-0dac-4405-9256-4f0b8d1453a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received unexpected event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f for instance with vm_state active and task_state resize_finish.#033[00m
Oct 13 11:45:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:45:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3448 "" "Go-http-client/1.1"
Oct 13 11:45:05 np0005485008 nova_compute[192512]: 2025-10-13 15:45:05.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.353 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370306.3525198, 525c99ab-3297-47fa-996f-96840e6855a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.353 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.355 2 DEBUG nova.compute.manager [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.359 2 INFO nova.virt.libvirt.driver [-] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Instance running successfully.#033[00m
Oct 13 11:45:06 np0005485008 virtqemud[192082]: argument unsupported: QEMU guest agent is not configured
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.365 2 DEBUG nova.virt.libvirt.guest [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.366 2 DEBUG nova.virt.libvirt.driver [None req-6af7cec9-d1e5-40a6-9405-402a030d1258 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.386 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.389 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.417 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.418 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370306.352753, 525c99ab-3297-47fa-996f-96840e6855a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.418 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] VM Started (Lifecycle Event)#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.442 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.445 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:45:06 np0005485008 nova_compute[192512]: 2025-10-13 15:45:06.528 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.280 2 DEBUG nova.network.neutron [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Updating instance_info_cache with network_info: [{"id": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "address": "fa:16:3e:15:93:1a", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a8a863-ae", "ovs_interfaceid": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.317 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-d31c495b-2e0f-4da1-bc80-cf4628fd772e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.336 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.337 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.337 2 DEBUG oslo_concurrency.lockutils [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.345 2 INFO nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 11:45:07 np0005485008 virtqemud[192082]: Domain id=4 name='instance-00000003' uuid=d31c495b-2e0f-4da1-bc80-cf4628fd772e is tainted: custom-monitor
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.583 2 DEBUG nova.network.neutron [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Updated VIF entry in instance network info cache for port c54b7ae5-4e02-4f53-aa67-fcd3b588177f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.584 2 DEBUG nova.network.neutron [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Updating instance_info_cache with network_info: [{"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.604 2 DEBUG oslo_concurrency.lockutils [req-16fbfa9a-203f-46d9-8091-1d63da87d478 req-f1527191-27dd-4e14-9c27-fa01c9aa093d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-525c99ab-3297-47fa-996f-96840e6855a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:45:07 np0005485008 systemd[1]: Stopping User Manager for UID 42436...
Oct 13 11:45:07 np0005485008 systemd[215673]: Activating special unit Exit the Session...
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped target Main User Target.
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped target Basic System.
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped target Paths.
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped target Sockets.
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped target Timers.
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 11:45:07 np0005485008 systemd[215673]: Closed D-Bus User Message Bus Socket.
Oct 13 11:45:07 np0005485008 systemd[215673]: Stopped Create User's Volatile Files and Directories.
Oct 13 11:45:07 np0005485008 systemd[215673]: Removed slice User Application Slice.
Oct 13 11:45:07 np0005485008 systemd[215673]: Reached target Shutdown.
Oct 13 11:45:07 np0005485008 systemd[215673]: Finished Exit the Session.
Oct 13 11:45:07 np0005485008 systemd[215673]: Reached target Exit the Session.
Oct 13 11:45:07 np0005485008 systemd[1]: user@42436.service: Deactivated successfully.
Oct 13 11:45:07 np0005485008 systemd[1]: Stopped User Manager for UID 42436.
Oct 13 11:45:07 np0005485008 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 13 11:45:07 np0005485008 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 13 11:45:07 np0005485008 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 13 11:45:07 np0005485008 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 13 11:45:07 np0005485008 systemd[1]: Removed slice User Slice of UID 42436.
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.777 2 DEBUG nova.compute.manager [req-8b73593c-f807-4101-a499-82e7e8ae5bd7 req-14cbe841-dd3d-49a7-a982-48fbfd535345 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.777 2 DEBUG oslo_concurrency.lockutils [req-8b73593c-f807-4101-a499-82e7e8ae5bd7 req-14cbe841-dd3d-49a7-a982-48fbfd535345 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.778 2 DEBUG oslo_concurrency.lockutils [req-8b73593c-f807-4101-a499-82e7e8ae5bd7 req-14cbe841-dd3d-49a7-a982-48fbfd535345 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.778 2 DEBUG oslo_concurrency.lockutils [req-8b73593c-f807-4101-a499-82e7e8ae5bd7 req-14cbe841-dd3d-49a7-a982-48fbfd535345 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.778 2 DEBUG nova.compute.manager [req-8b73593c-f807-4101-a499-82e7e8ae5bd7 req-14cbe841-dd3d-49a7-a982-48fbfd535345 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] No waiting events found dispatching network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:07 np0005485008 nova_compute[192512]: 2025-10-13 15:45:07.778 2 WARNING nova.compute.manager [req-8b73593c-f807-4101-a499-82e7e8ae5bd7 req-14cbe841-dd3d-49a7-a982-48fbfd535345 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received unexpected event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f for instance with vm_state resized and task_state None.#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.356 2 INFO nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.365 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Updating instance_info_cache with network_info: [{"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.386 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-7144b3d2-d00d-489a-81a2-11dd796fb608" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.386 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.386 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.387 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.388 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.430 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.430 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.431 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.431 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.531 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.595 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.596 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.662 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.670 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.730 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.731 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.789 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.795 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.860 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.861 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.921 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.930 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.998 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:08 np0005485008 nova_compute[192512]: 2025-10-13 15:45:08.999 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.057 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.065 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.126 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.127 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.206 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.362 2 INFO nova.virt.libvirt.driver [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.368 2 DEBUG nova.compute.manager [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.376 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.377 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5123MB free_disk=73.32582092285156GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.378 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.378 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.390 2 DEBUG nova.objects.instance [None req-923212ff-ebe0-4c93-9f47-41ae4b1f7823 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.447 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Applying migration context for instance 525c99ab-3297-47fa-996f-96840e6855a5 as it has an incoming, in-progress migration 9fdfca36-1ac1-4433-9998-53818a892917. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.448 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration for instance d31c495b-2e0f-4da1-bc80-cf4628fd772e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.467 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.467 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Updating resource usage from migration 9fdfca36-1ac1-4433-9998-53818a892917#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.502 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.502 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 525c99ab-3297-47fa-996f-96840e6855a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.503 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 7144b3d2-d00d-489a-81a2-11dd796fb608 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.503 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.520 2 WARNING nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance d31c495b-2e0f-4da1-bc80-cf4628fd772e is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.521 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.521 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.768 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:45:09 np0005485008 podman[215988]: 2025-10-13 15:45:09.779925584 +0000 UTC m=+0.075753774 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.789 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.820 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.820 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.821 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.821 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.881 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.882 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:09 np0005485008 nova_compute[192512]: 2025-10-13 15:45:09.882 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 11:45:10 np0005485008 nova_compute[192512]: 2025-10-13 15:45:10.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:12 np0005485008 nova_compute[192512]: 2025-10-13 15:45:12.900 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:14 np0005485008 nova_compute[192512]: 2025-10-13 15:45:14.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.745 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.772 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Triggering sync for uuid ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.773 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Triggering sync for uuid d31c495b-2e0f-4da1-bc80-cf4628fd772e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.773 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Triggering sync for uuid 7144b3d2-d00d-489a-81a2-11dd796fb608 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.773 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Triggering sync for uuid 525c99ab-3297-47fa-996f-96840e6855a5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.773 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Triggering sync for uuid 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.774 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.774 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.774 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.774 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.775 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.775 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.775 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.775 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "525c99ab-3297-47fa-996f-96840e6855a5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.776 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.776 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.822 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.823 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.824 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "525c99ab-3297-47fa-996f-96840e6855a5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.826 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:15 np0005485008 nova_compute[192512]: 2025-10-13 15:45:15.846 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:17Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:c3:97 10.100.0.7
Oct 13 11:45:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:45:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:45:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:45:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:45:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:45:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:45:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.689 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.689 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.690 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.690 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.690 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.691 2 INFO nova.compute.manager [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Terminating instance#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.692 2 DEBUG nova.compute.manager [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:45:19 np0005485008 kernel: tap50a7b2c6-0b (unregistering): left promiscuous mode
Oct 13 11:45:19 np0005485008 NetworkManager[51587]: <info>  [1760370319.7180] device (tap50a7b2c6-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00051|binding|INFO|Releasing lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f from this chassis (sb_readonly=0)
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00052|binding|INFO|Setting lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f down in Southbound
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00053|binding|INFO|Removing iface tap50a7b2c6-0b ovn-installed in OVS
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.740 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:d5:b5 10.100.0.13'], port_security=['fa:16:3e:d3:d5:b5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bef78b8-c7d4-43d6-a28f-39a5b4b4250a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50a7b2c6-0bc6-4214-b676-2fd1cce8198f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.741 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50a7b2c6-0bc6-4214-b676-2fd1cce8198f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.742 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.768 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f43da915-d4eb-4d33-b0fa-332a8e17292d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:19 np0005485008 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 13 11:45:19 np0005485008 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 14.508s CPU time.
Oct 13 11:45:19 np0005485008 systemd-machined[152551]: Machine qemu-3-instance-00000006 terminated.
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.823 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c04149ba-1ea2-493c-a4d8-0790db39e336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.828 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[1a676e1e-d4f0-4478-8169-e432b1a3fa57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.861 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f56c5f5a-b4d7-44c2-bea3-eb98d6824608]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.883 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1ccb8f-125a-4c47-afd6-fd5dc4695377]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 14, 'rx_bytes': 1924, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 14, 'rx_bytes': 1924, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216029, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.905 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c94b83b9-3b27-449b-80b2-52becddade4a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216030, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216030, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.906 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.936 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.937 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.937 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.938 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.943 2 DEBUG nova.compute.manager [req-fc7c25c8-49a0-43d2-8bca-9dc81f2fa8ea req-35e890a8-7585-4554-ad08-9ade58a1dd5f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-unplugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:19 np0005485008 kernel: tap50a7b2c6-0b: entered promiscuous mode
Oct 13 11:45:19 np0005485008 NetworkManager[51587]: <info>  [1760370319.9451] manager: (tap50a7b2c6-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.944 2 DEBUG oslo_concurrency.lockutils [req-fc7c25c8-49a0-43d2-8bca-9dc81f2fa8ea req-35e890a8-7585-4554-ad08-9ade58a1dd5f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.945 2 DEBUG oslo_concurrency.lockutils [req-fc7c25c8-49a0-43d2-8bca-9dc81f2fa8ea req-35e890a8-7585-4554-ad08-9ade58a1dd5f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.945 2 DEBUG oslo_concurrency.lockutils [req-fc7c25c8-49a0-43d2-8bca-9dc81f2fa8ea req-35e890a8-7585-4554-ad08-9ade58a1dd5f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.946 2 DEBUG nova.compute.manager [req-fc7c25c8-49a0-43d2-8bca-9dc81f2fa8ea req-35e890a8-7585-4554-ad08-9ade58a1dd5f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] No waiting events found dispatching network-vif-unplugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.946 2 DEBUG nova.compute.manager [req-fc7c25c8-49a0-43d2-8bca-9dc81f2fa8ea req-35e890a8-7585-4554-ad08-9ade58a1dd5f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-unplugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:45:19 np0005485008 kernel: tap50a7b2c6-0b (unregistering): left promiscuous mode
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00054|binding|INFO|Claiming lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f for this chassis.
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00055|binding|INFO|50a7b2c6-0bc6-4214-b676-2fd1cce8198f: Claiming fa:16:3e:d3:d5:b5 10.100.0.13
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.960 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:d5:b5 10.100.0.13'], port_security=['fa:16:3e:d3:d5:b5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bef78b8-c7d4-43d6-a28f-39a5b4b4250a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50a7b2c6-0bc6-4214-b676-2fd1cce8198f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.962 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50a7b2c6-0bc6-4214-b676-2fd1cce8198f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.964 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00056|binding|INFO|Setting lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f ovn-installed in OVS
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00057|binding|INFO|Setting lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f up in Southbound
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00058|binding|INFO|Releasing lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f from this chassis (sb_readonly=1)
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00059|if_status|INFO|Not setting lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f down as sb is readonly
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00060|binding|INFO|Removing iface tap50a7b2c6-0b ovn-installed in OVS
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00061|binding|INFO|Releasing lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f from this chassis (sb_readonly=0)
Oct 13 11:45:19 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:19Z|00062|binding|INFO|Setting lport 50a7b2c6-0bc6-4214-b676-2fd1cce8198f down in Southbound
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.984 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:d5:b5 10.100.0.13'], port_security=['fa:16:3e:d3:d5:b5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3bef78b8-c7d4-43d6-a28f-39a5b4b4250a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50a7b2c6-0bc6-4214-b676-2fd1cce8198f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:19 np0005485008 nova_compute[192512]: 2025-10-13 15:45:19.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:19.986 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[953ade4e-7f0b-4880-9e89-4a84f88033d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.010 2 INFO nova.virt.libvirt.driver [-] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Instance destroyed successfully.#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.012 2 DEBUG nova.objects.instance [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'resources' on Instance uuid 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.019 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[edcff112-77a9-480e-951f-1987e5990b1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.022 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[cce5cd93-54b3-4cb5-88ce-79074ea90032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.025 2 DEBUG nova.virt.libvirt.vif [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-808761554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-808761554',id=6,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:44:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-zrk4mi2i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',
owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:44:31Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=3bef78b8-c7d4-43d6-a28f-39a5b4b4250a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.025 2 DEBUG nova.network.os_vif_util [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "address": "fa:16:3e:d3:d5:b5", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a7b2c6-0b", "ovs_interfaceid": "50a7b2c6-0bc6-4214-b676-2fd1cce8198f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.026 2 DEBUG nova.network.os_vif_util [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:d5:b5,bridge_name='br-int',has_traffic_filtering=True,id=50a7b2c6-0bc6-4214-b676-2fd1cce8198f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a7b2c6-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.026 2 DEBUG os_vif [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:d5:b5,bridge_name='br-int',has_traffic_filtering=True,id=50a7b2c6-0bc6-4214-b676-2fd1cce8198f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a7b2c6-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.028 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50a7b2c6-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.035 2 INFO os_vif [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:d5:b5,bridge_name='br-int',has_traffic_filtering=True,id=50a7b2c6-0bc6-4214-b676-2fd1cce8198f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50a7b2c6-0b')#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.036 2 INFO nova.virt.libvirt.driver [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Deleting instance files /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a_del#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.036 2 INFO nova.virt.libvirt.driver [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Deletion of /var/lib/nova/instances/3bef78b8-c7d4-43d6-a28f-39a5b4b4250a_del complete#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.057 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[cab77c87-09a5-436b-a4ee-a264fbe4b140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.078 2 DEBUG nova.virt.libvirt.host [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.079 2 INFO nova.virt.libvirt.host [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] UEFI support detected#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.081 2 INFO nova.compute.manager [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.081 2 DEBUG oslo.service.loopingcall [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.081 2 DEBUG nova.compute.manager [-] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.082 2 DEBUG nova.network.neutron [-] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.085 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[68dd301b-62f9-42d6-bdb1-56186862b9ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 16, 'rx_bytes': 1924, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 16, 'rx_bytes': 1924, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216053, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.111 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ec801339-8a51-446b-a748-35635267e6cf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216054, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216054, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.113 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.118 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.118 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.118 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.119 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.120 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50a7b2c6-0bc6-4214-b676-2fd1cce8198f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.121 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.146 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d8cc84-4a68-4f93-840d-ceec0abf31b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.197 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[8e06436f-5162-415d-8622-810a22993304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.202 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[bed57ead-ee3d-480a-9631-328ea1ef2f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.244 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[26c6c771-3995-4241-8122-f37ca1734e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.268 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ced38434-7be3-4515-b94b-063e35158cc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 18, 'rx_bytes': 1924, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 18, 'rx_bytes': 1924, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216061, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.287 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e73961bc-2820-43d3-a519-dd923973cbd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216062, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216062, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.290 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.294 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.294 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.294 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:20.295 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.887 2 DEBUG nova.network.neutron [-] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.908 2 INFO nova.compute.manager [-] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.948 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:20 np0005485008 nova_compute[192512]: 2025-10-13 15:45:20.949 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.087 2 DEBUG nova.compute.provider_tree [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.102 2 DEBUG nova.scheduler.client.report [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.123 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.145 2 INFO nova.scheduler.client.report [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Deleted allocations for instance 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.208 2 DEBUG oslo_concurrency.lockutils [None req-0cb6f806-d7dc-4535-a174-18b614175fa6 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.608 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.609 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.609 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.610 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.610 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.611 2 INFO nova.compute.manager [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Terminating instance#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.612 2 DEBUG nova.compute.manager [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:45:21 np0005485008 kernel: tapc54b7ae5-4e (unregistering): left promiscuous mode
Oct 13 11:45:21 np0005485008 NetworkManager[51587]: <info>  [1760370321.6444] device (tapc54b7ae5-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00063|binding|INFO|Releasing lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f from this chassis (sb_readonly=0)
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00064|binding|INFO|Setting lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f down in Southbound
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00065|binding|INFO|Removing iface tapc54b7ae5-4e ovn-installed in OVS
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.655 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:c3:97 10.100.0.7'], port_security=['fa:16:3e:9c:c3:97 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '525c99ab-3297-47fa-996f-96840e6855a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=c54b7ae5-4e02-4f53-aa67-fcd3b588177f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.656 103642 INFO neutron.agent.ovn.metadata.agent [-] Port c54b7ae5-4e02-4f53-aa67-fcd3b588177f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.658 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.679 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[56fac84b-9511-4df8-acac-9f3e6f61e6a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.710 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[16a07c09-1193-4fb4-8fd1-9e6abaf422eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.714 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4c1b08-2a81-4b4d-9887-f558b3ddd0cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 13 11:45:21 np0005485008 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 12.492s CPU time.
Oct 13 11:45:21 np0005485008 systemd-machined[152551]: Machine qemu-5-instance-00000005 terminated.
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.746 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[365b55f0-cf89-4882-a601-a7f8f2edcd65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.763 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[009a3cbb-5192-45ab-bf8f-936c6ece7567]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 20, 'rx_bytes': 1924, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 20, 'rx_bytes': 1924, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216072, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.782 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[672bec62-8e05-4a19-ab9b-6c47c67a3fdb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216073, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216073, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.784 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.790 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.791 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.791 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.791 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:21 np0005485008 NetworkManager[51587]: <info>  [1760370321.8357] manager: (tapc54b7ae5-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Oct 13 11:45:21 np0005485008 kernel: tapc54b7ae5-4e: entered promiscuous mode
Oct 13 11:45:21 np0005485008 systemd-udevd[216020]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:45:21 np0005485008 kernel: tapc54b7ae5-4e (unregistering): left promiscuous mode
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00066|binding|INFO|Claiming lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f for this chassis.
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00067|binding|INFO|c54b7ae5-4e02-4f53-aa67-fcd3b588177f: Claiming fa:16:3e:9c:c3:97 10.100.0.7
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.854 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:c3:97 10.100.0.7'], port_security=['fa:16:3e:9c:c3:97 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '525c99ab-3297-47fa-996f-96840e6855a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=c54b7ae5-4e02-4f53-aa67-fcd3b588177f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.856 103642 INFO neutron.agent.ovn.metadata.agent [-] Port c54b7ae5-4e02-4f53-aa67-fcd3b588177f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.858 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.879 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[50ad2102-5780-4b32-bb49-b579623d24ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00068|binding|INFO|Setting lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f ovn-installed in OVS
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00069|binding|INFO|Setting lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f up in Southbound
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00070|binding|INFO|Releasing lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f from this chassis (sb_readonly=1)
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00071|binding|INFO|Removing iface tapc54b7ae5-4e ovn-installed in OVS
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00072|binding|INFO|Releasing lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f from this chassis (sb_readonly=0)
Oct 13 11:45:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:21Z|00073|binding|INFO|Setting lport c54b7ae5-4e02-4f53-aa67-fcd3b588177f down in Southbound
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.896 2 INFO nova.virt.libvirt.driver [-] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Instance destroyed successfully.#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.896 2 DEBUG nova.objects.instance [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'resources' on Instance uuid 525c99ab-3297-47fa-996f-96840e6855a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.899 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:c3:97 10.100.0.7'], port_security=['fa:16:3e:9c:c3:97 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '525c99ab-3297-47fa-996f-96840e6855a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=c54b7ae5-4e02-4f53-aa67-fcd3b588177f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.908 2 DEBUG nova.virt.libvirt.vif [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:43:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-1341624973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-1341624973',id=5,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:45:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-6ayf6mjs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:45:13Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=525c99ab-3297-47fa-996f-96840e6855a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.909 2 DEBUG nova.network.os_vif_util [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "address": "fa:16:3e:9c:c3:97", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc54b7ae5-4e", "ovs_interfaceid": "c54b7ae5-4e02-4f53-aa67-fcd3b588177f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.910 2 DEBUG nova.network.os_vif_util [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:c3:97,bridge_name='br-int',has_traffic_filtering=True,id=c54b7ae5-4e02-4f53-aa67-fcd3b588177f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc54b7ae5-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.911 2 DEBUG os_vif [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:c3:97,bridge_name='br-int',has_traffic_filtering=True,id=c54b7ae5-4e02-4f53-aa67-fcd3b588177f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc54b7ae5-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc54b7ae5-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.920 2 INFO os_vif [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:c3:97,bridge_name='br-int',has_traffic_filtering=True,id=c54b7ae5-4e02-4f53-aa67-fcd3b588177f,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc54b7ae5-4e')#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.921 2 INFO nova.virt.libvirt.driver [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Deleting instance files /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5_del#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.921 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf3c11d-d4fb-4b85-9580-fc5ca19362c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.928 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[123630a0-1dc4-4282-ad1a-25b73334b979]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.930 2 INFO nova.virt.libvirt.driver [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Deletion of /var/lib/nova/instances/525c99ab-3297-47fa-996f-96840e6855a5_del complete#033[00m
Oct 13 11:45:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:21.973 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[6b105a44-f378-4635-9e5d-55b20428e5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.976 2 INFO nova.compute.manager [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.977 2 DEBUG oslo.service.loopingcall [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.978 2 DEBUG nova.compute.manager [-] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:45:21 np0005485008 nova_compute[192512]: 2025-10-13 15:45:21.978 2 DEBUG nova.network.neutron [-] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.002 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[753dd4f6-f553-4fd9-a6a9-165b1a07d540]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 22, 'rx_bytes': 1924, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 22, 'rx_bytes': 1924, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216092, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.024 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8791a0df-d41d-44bb-ba8c-7a1452ab0bbb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216093, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216093, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.026 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.029 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.029 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.030 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.030 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.031 103642 INFO neutron.agent.ovn.metadata.agent [-] Port c54b7ae5-4e02-4f53-aa67-fcd3b588177f in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.033 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.055 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8966fe-da30-4ef9-8032-b4d2340a3914]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.092 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c597e283-81e7-4429-afa5-7a84727b2dc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.097 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[820c70ce-4240-4261-9b19-525be0cf64a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.141 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[3c224303-90e1-4391-85f8-4d790915fb5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.144 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.145 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.145 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.146 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.146 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] No waiting events found dispatching network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.147 2 WARNING nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received unexpected event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.147 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.148 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.148 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.149 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.149 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] No waiting events found dispatching network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.149 2 WARNING nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received unexpected event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.150 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.150 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.151 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.151 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.152 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] No waiting events found dispatching network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.152 2 WARNING nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received unexpected event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.153 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-deleted-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.153 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.153 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.154 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.154 2 DEBUG oslo_concurrency.lockutils [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3bef78b8-c7d4-43d6-a28f-39a5b4b4250a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.155 2 DEBUG nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] No waiting events found dispatching network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.155 2 WARNING nova.compute.manager [req-38285ac0-4ba6-4f29-a263-939d0f8dfc43 req-78f9f819-a419-4b72-b714-d3b710191e43 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Received unexpected event network-vif-plugged-50a7b2c6-0bc6-4214-b676-2fd1cce8198f for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.167 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a7d0a9-eede-4a16-a978-8820add125b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 24, 'rx_bytes': 1924, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 24, 'rx_bytes': 1924, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216099, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.192 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[388423a1-adfb-48d1-81b0-f65576371322]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216100, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216100, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.194 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.198 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.198 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.199 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.199 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.363 2 DEBUG nova.compute.manager [req-8bf8a8a2-968d-4377-85c2-939b24f1771a req-3a944a41-df4f-41f1-a610-a723245ac316 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-unplugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.363 2 DEBUG oslo_concurrency.lockutils [req-8bf8a8a2-968d-4377-85c2-939b24f1771a req-3a944a41-df4f-41f1-a610-a723245ac316 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.363 2 DEBUG oslo_concurrency.lockutils [req-8bf8a8a2-968d-4377-85c2-939b24f1771a req-3a944a41-df4f-41f1-a610-a723245ac316 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.364 2 DEBUG oslo_concurrency.lockutils [req-8bf8a8a2-968d-4377-85c2-939b24f1771a req-3a944a41-df4f-41f1-a610-a723245ac316 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.364 2 DEBUG nova.compute.manager [req-8bf8a8a2-968d-4377-85c2-939b24f1771a req-3a944a41-df4f-41f1-a610-a723245ac316 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] No waiting events found dispatching network-vif-unplugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.364 2 DEBUG nova.compute.manager [req-8bf8a8a2-968d-4377-85c2-939b24f1771a req-3a944a41-df4f-41f1-a610-a723245ac316 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-unplugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.582 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:22 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:22.582 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.655 2 DEBUG nova.network.neutron [-] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.673 2 INFO nova.compute.manager [-] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Took 0.69 seconds to deallocate network for instance.#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.716 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.717 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.827 2 DEBUG nova.compute.provider_tree [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.851 2 DEBUG nova.scheduler.client.report [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.897 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:22 np0005485008 nova_compute[192512]: 2025-10-13 15:45:22.935 2 INFO nova.scheduler.client.report [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Deleted allocations for instance 525c99ab-3297-47fa-996f-96840e6855a5#033[00m
Oct 13 11:45:23 np0005485008 nova_compute[192512]: 2025-10-13 15:45:23.057 2 DEBUG oslo_concurrency.lockutils [None req-fdc239ee-af68-4a8d-92ec-0ed9a0b31516 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.314 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.315 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.315 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.315 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.316 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.317 2 INFO nova.compute.manager [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Terminating instance#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.318 2 DEBUG nova.compute.manager [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:45:24 np0005485008 kernel: tap62269976-0b (unregistering): left promiscuous mode
Oct 13 11:45:24 np0005485008 NetworkManager[51587]: <info>  [1760370324.3429] device (tap62269976-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:45:24 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:24Z|00074|binding|INFO|Releasing lport 62269976-0b06-4e64-9439-0ec2ac44f78c from this chassis (sb_readonly=0)
Oct 13 11:45:24 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:24Z|00075|binding|INFO|Setting lport 62269976-0b06-4e64-9439-0ec2ac44f78c down in Southbound
Oct 13 11:45:24 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:24Z|00076|binding|INFO|Removing iface tap62269976-0b ovn-installed in OVS
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.357 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:e3:5f 10.100.0.14'], port_security=['fa:16:3e:6e:e3:5f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7144b3d2-d00d-489a-81a2-11dd796fb608', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=62269976-0b06-4e64-9439-0ec2ac44f78c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.359 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 62269976-0b06-4e64-9439-0ec2ac44f78c in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.361 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.377 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[cd512163-3e6a-47f8-ad3f-0f6f227ff3db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.412 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0356d2-0986-4427-a41a-1dd4e74eb120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.415 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec0ad40-8ca6-4680-b65d-48bc4838049a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:24 np0005485008 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 13 11:45:24 np0005485008 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 15.624s CPU time.
Oct 13 11:45:24 np0005485008 systemd-machined[152551]: Machine qemu-2-instance-00000004 terminated.
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.453 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[db97e81a-3060-4d01-ba18-5ede430a4701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.474 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1608a180-e1c8-4d1d-891e-4c50dcc58adf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 26, 'rx_bytes': 1924, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 26, 'rx_bytes': 1924, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216112, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.480 2 DEBUG nova.compute.manager [req-c0f3f8e9-a5d6-4375-a867-a8f7fd3573f2 req-9bc8b9d0-ab25-4e50-93d8-7683b4b24e64 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.480 2 DEBUG oslo_concurrency.lockutils [req-c0f3f8e9-a5d6-4375-a867-a8f7fd3573f2 req-9bc8b9d0-ab25-4e50-93d8-7683b4b24e64 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "525c99ab-3297-47fa-996f-96840e6855a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.481 2 DEBUG oslo_concurrency.lockutils [req-c0f3f8e9-a5d6-4375-a867-a8f7fd3573f2 req-9bc8b9d0-ab25-4e50-93d8-7683b4b24e64 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.481 2 DEBUG oslo_concurrency.lockutils [req-c0f3f8e9-a5d6-4375-a867-a8f7fd3573f2 req-9bc8b9d0-ab25-4e50-93d8-7683b4b24e64 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "525c99ab-3297-47fa-996f-96840e6855a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.481 2 DEBUG nova.compute.manager [req-c0f3f8e9-a5d6-4375-a867-a8f7fd3573f2 req-9bc8b9d0-ab25-4e50-93d8-7683b4b24e64 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] No waiting events found dispatching network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.482 2 WARNING nova.compute.manager [req-c0f3f8e9-a5d6-4375-a867-a8f7fd3573f2 req-9bc8b9d0-ab25-4e50-93d8-7683b4b24e64 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received unexpected event network-vif-plugged-c54b7ae5-4e02-4f53-aa67-fcd3b588177f for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.482 2 DEBUG nova.compute.manager [req-c0f3f8e9-a5d6-4375-a867-a8f7fd3573f2 req-9bc8b9d0-ab25-4e50-93d8-7683b4b24e64 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Received event network-vif-deleted-c54b7ae5-4e02-4f53-aa67-fcd3b588177f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.495 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7aee1f34-7438-4a41-bf1d-d1137cd50936]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216113, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216113, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.497 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.506 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.507 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.507 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:24.507 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.600 2 INFO nova.virt.libvirt.driver [-] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Instance destroyed successfully.#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.601 2 DEBUG nova.objects.instance [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'resources' on Instance uuid 7144b3d2-d00d-489a-81a2-11dd796fb608 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.615 2 DEBUG nova.virt.libvirt.vif [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:43:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-786066552',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-786066552',id=4,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:43:48Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-ude8s7l8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',
owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:43:48Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=7144b3d2-d00d-489a-81a2-11dd796fb608,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.616 2 DEBUG nova.network.os_vif_util [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "62269976-0b06-4e64-9439-0ec2ac44f78c", "address": "fa:16:3e:6e:e3:5f", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62269976-0b", "ovs_interfaceid": "62269976-0b06-4e64-9439-0ec2ac44f78c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.616 2 DEBUG nova.network.os_vif_util [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:e3:5f,bridge_name='br-int',has_traffic_filtering=True,id=62269976-0b06-4e64-9439-0ec2ac44f78c,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62269976-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.617 2 DEBUG os_vif [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:e3:5f,bridge_name='br-int',has_traffic_filtering=True,id=62269976-0b06-4e64-9439-0ec2ac44f78c,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62269976-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.618 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62269976-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.623 2 INFO os_vif [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:e3:5f,bridge_name='br-int',has_traffic_filtering=True,id=62269976-0b06-4e64-9439-0ec2ac44f78c,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62269976-0b')#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.624 2 INFO nova.virt.libvirt.driver [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Deleting instance files /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608_del#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.624 2 INFO nova.virt.libvirt.driver [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Deletion of /var/lib/nova/instances/7144b3d2-d00d-489a-81a2-11dd796fb608_del complete#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.673 2 INFO nova.compute.manager [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Took 0.35 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.673 2 DEBUG oslo.service.loopingcall [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.674 2 DEBUG nova.compute.manager [-] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:45:24 np0005485008 nova_compute[192512]: 2025-10-13 15:45:24.674 2 DEBUG nova.network.neutron [-] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.609 2 DEBUG nova.network.neutron [-] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.628 2 INFO nova.compute.manager [-] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Took 0.95 seconds to deallocate network for instance.#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.676 2 DEBUG nova.compute.manager [req-2e93e0de-dceb-4d65-93f9-b9c95ad45f99 req-dc27fc73-d699-4844-aca2-2e69cbb2a097 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received event network-vif-deleted-62269976-0b06-4e64-9439-0ec2ac44f78c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.679 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.680 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.764 2 DEBUG nova.compute.provider_tree [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.782 2 DEBUG nova.scheduler.client.report [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.817 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.855 2 INFO nova.scheduler.client.report [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Deleted allocations for instance 7144b3d2-d00d-489a-81a2-11dd796fb608#033[00m
Oct 13 11:45:25 np0005485008 nova_compute[192512]: 2025-10-13 15:45:25.963 2 DEBUG oslo_concurrency.lockutils [None req-08a40bda-ae15-4c49-ad83-f133ed0fa45d 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.587 2 DEBUG nova.compute.manager [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received event network-vif-unplugged-62269976-0b06-4e64-9439-0ec2ac44f78c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.587 2 DEBUG oslo_concurrency.lockutils [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.587 2 DEBUG oslo_concurrency.lockutils [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.588 2 DEBUG oslo_concurrency.lockutils [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.588 2 DEBUG nova.compute.manager [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] No waiting events found dispatching network-vif-unplugged-62269976-0b06-4e64-9439-0ec2ac44f78c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.588 2 WARNING nova.compute.manager [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received unexpected event network-vif-unplugged-62269976-0b06-4e64-9439-0ec2ac44f78c for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.588 2 DEBUG nova.compute.manager [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received event network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.589 2 DEBUG oslo_concurrency.lockutils [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.589 2 DEBUG oslo_concurrency.lockutils [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.589 2 DEBUG oslo_concurrency.lockutils [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "7144b3d2-d00d-489a-81a2-11dd796fb608-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.589 2 DEBUG nova.compute.manager [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] No waiting events found dispatching network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.589 2 WARNING nova.compute.manager [req-050bccb9-3004-4afd-9a0c-4512b9a5e378 req-958717d5-207b-4a70-a550-41c6d4c21fa1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Received unexpected event network-vif-plugged-62269976-0b06-4e64-9439-0ec2ac44f78c for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.687 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.689 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.689 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.690 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.690 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.691 2 INFO nova.compute.manager [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Terminating instance#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.692 2 DEBUG nova.compute.manager [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:45:26 np0005485008 kernel: tap74a8a863-ae (unregistering): left promiscuous mode
Oct 13 11:45:26 np0005485008 NetworkManager[51587]: <info>  [1760370326.7182] device (tap74a8a863-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00077|binding|INFO|Releasing lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 from this chassis (sb_readonly=0)
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00078|binding|INFO|Setting lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 down in Southbound
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00079|binding|INFO|Removing iface tap74a8a863-ae ovn-installed in OVS
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.741 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:93:1a 10.100.0.8'], port_security=['fa:16:3e:15:93:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd31c495b-2e0f-4da1-bc80-cf4628fd772e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '13', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.743 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.745 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:26 np0005485008 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 13 11:45:26 np0005485008 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000003.scope: Consumed 2.489s CPU time.
Oct 13 11:45:26 np0005485008 systemd-machined[152551]: Machine qemu-4-instance-00000003 terminated.
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.766 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7ea43b-7061-4f7d-a20b-ce84de9441af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.803 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b0acca4e-7bcb-4440-bccf-21926018f68d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.807 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f205c583-3cab-4977-a219-7b50a4c99db4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 podman[216131]: 2025-10-13 15:45:26.827069523 +0000 UTC m=+0.100754220 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 13 11:45:26 np0005485008 podman[216133]: 2025-10-13 15:45:26.83050178 +0000 UTC m=+0.086103775 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.846 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[8492af3f-d479-4b64-8afb-b4fe510ffaf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 podman[216132]: 2025-10-13 15:45:26.849539471 +0000 UTC m=+0.122842666 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 11:45:26 np0005485008 podman[216136]: 2025-10-13 15:45:26.863249006 +0000 UTC m=+0.118792250 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.871 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d421b313-db85-4312-8dea-f6156c5db8df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 28, 'rx_bytes': 1924, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 28, 'rx_bytes': 1924, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216223, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.891 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[660f81ab-8213-4d0f-b776-df4fdb917795]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216227, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216227, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.893 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.900 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.900 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.901 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.901 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:26 np0005485008 kernel: tap74a8a863-ae: entered promiscuous mode
Oct 13 11:45:26 np0005485008 NetworkManager[51587]: <info>  [1760370326.9131] manager: (tap74a8a863-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00080|binding|INFO|Claiming lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 for this chassis.
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00081|binding|INFO|74a8a863-ae9d-45d4-ab72-b3d0e08d02f2: Claiming fa:16:3e:15:93:1a 10.100.0.8
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:26 np0005485008 kernel: tap74a8a863-ae (unregistering): left promiscuous mode
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.925 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:93:1a 10.100.0.8'], port_security=['fa:16:3e:15:93:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd31c495b-2e0f-4da1-bc80-cf4628fd772e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '13', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.926 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 bound to our chassis#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.928 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00082|binding|INFO|Setting lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 ovn-installed in OVS
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00083|binding|INFO|Setting lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 up in Southbound
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00084|binding|INFO|Releasing lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 from this chassis (sb_readonly=1)
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00085|binding|INFO|Removing iface tap74a8a863-ae ovn-installed in OVS
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00086|binding|INFO|Releasing lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 from this chassis (sb_readonly=0)
Oct 13 11:45:26 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:26Z|00087|binding|INFO|Setting lport 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 down in Southbound
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.947 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c12d22e-aa61-4ad6-a5e5-795c4e8ff0cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.953 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:93:1a 10.100.0.8'], port_security=['fa:16:3e:15:93:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd31c495b-2e0f-4da1-bc80-cf4628fd772e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '13', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.967 2 INFO nova.virt.libvirt.driver [-] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Instance destroyed successfully.#033[00m
Oct 13 11:45:26 np0005485008 nova_compute[192512]: 2025-10-13 15:45:26.967 2 DEBUG nova.objects.instance [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'resources' on Instance uuid d31c495b-2e0f-4da1-bc80-cf4628fd772e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.971 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2a344a-4f54-42cf-9116-58e6ff26c0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.973 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d58042f6-8d5a-467d-b204-75f40b45c68d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:26.992 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[6aea6d75-aa4b-4374-ae19-38ce170b76a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.008 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[402b3a05-b17e-4784-ba5c-cf9610a5ff9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 30, 'rx_bytes': 1924, 'tx_bytes': 1448, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 30, 'rx_bytes': 1924, 'tx_bytes': 1448, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216244, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.023 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bbb908-6aba-486c-bcf8-3169b931a565]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216245, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216245, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.024 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.031 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.031 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.032 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.032 2 DEBUG nova.virt.libvirt.vif [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T15:43:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-911356362',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-911356362',id=3,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:43:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-cb2dlqyx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:45:09Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=d31c495b-2e0f-4da1-bc80-cf4628fd772e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "address": "fa:16:3e:15:93:1a", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a8a863-ae", "ovs_interfaceid": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.032 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.032 2 DEBUG nova.network.os_vif_util [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "address": "fa:16:3e:15:93:1a", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap74a8a863-ae", "ovs_interfaceid": "74a8a863-ae9d-45d4-ab72-b3d0e08d02f2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.033 2 DEBUG nova.network.os_vif_util [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a8a863-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.033 2 DEBUG os_vif [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a8a863-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.034 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a8a863-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.035 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 844f3185-3b42-4e49-9ef5-690ae5e238a0#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.039 2 INFO os_vif [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:93:1a,bridge_name='br-int',has_traffic_filtering=True,id=74a8a863-ae9d-45d4-ab72-b3d0e08d02f2,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap74a8a863-ae')#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.040 2 INFO nova.virt.libvirt.driver [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Deleting instance files /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e_del#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.040 2 INFO nova.virt.libvirt.driver [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Deletion of /var/lib/nova/instances/d31c495b-2e0f-4da1-bc80-cf4628fd772e_del complete#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.051 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[98ad6b1d-6106-4e64-a2be-bdd4e727f822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.080 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[096ffb08-4286-4d32-b328-3a8eb552e9b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.083 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[485e776f-42d9-4736-a658-584591c40857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.090 2 INFO nova.compute.manager [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.091 2 DEBUG oslo.service.loopingcall [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.091 2 DEBUG nova.compute.manager [-] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.091 2 DEBUG nova.network.neutron [-] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.118 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f48eedbc-1546-4de3-92be-dc3e8b0c624e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.140 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5a772665-fd6a-405d-96dc-a5ddee28c566]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap844f3185-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:dc:07'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 32, 'rx_bytes': 1924, 'tx_bytes': 1532, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 32, 'rx_bytes': 1924, 'tx_bytes': 1532, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371615, 'reachable_time': 18091, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216252, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.164 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b59891a7-c107-4f26-8b83-08a523a2cec0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371629, 'tstamp': 371629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216253, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap844f3185-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 371633, 'tstamp': 371633}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216253, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.167 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.170 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap844f3185-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.170 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.171 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap844f3185-30, col_values=(('external_ids', {'iface-id': '94a96fe0-138f-4b15-b38e-a8f08a7e2933'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.171 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:45:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:27.584 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.734 2 DEBUG nova.network.neutron [-] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.758 2 INFO nova.compute.manager [-] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Took 0.67 seconds to deallocate network for instance.#033[00m
Oct 13 11:45:27 np0005485008 podman[216254]: 2025-10-13 15:45:27.770415652 +0000 UTC m=+0.073359648 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.777 2 DEBUG nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Received event network-vif-unplugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.778 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.778 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.779 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.779 2 DEBUG nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] No waiting events found dispatching network-vif-unplugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.779 2 DEBUG nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Received event network-vif-unplugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.779 2 DEBUG nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Received event network-vif-plugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.780 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.780 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.780 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.780 2 DEBUG nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] No waiting events found dispatching network-vif-plugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.780 2 WARNING nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Received unexpected event network-vif-plugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 for instance with vm_state active and task_state deleting.#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.781 2 DEBUG nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Received event network-vif-plugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.781 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.781 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.781 2 DEBUG oslo_concurrency.lockutils [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.781 2 DEBUG nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] No waiting events found dispatching network-vif-plugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.782 2 WARNING nova.compute.manager [req-c63debf0-82e7-419b-ab14-055b466903d5 req-d303aa55-b2a1-4d72-9d8f-6fc351d6bf6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Received unexpected event network-vif-plugged-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 for instance with vm_state active and task_state deleting.#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.795 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.795 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.801 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.827 2 INFO nova.scheduler.client.report [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Deleted allocations for instance d31c495b-2e0f-4da1-bc80-cf4628fd772e#033[00m
Oct 13 11:45:27 np0005485008 nova_compute[192512]: 2025-10-13 15:45:27.946 2 DEBUG oslo_concurrency.lockutils [None req-b5e50eed-a40d-4623-9a12-bbf13f4b5e07 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "d31c495b-2e0f-4da1-bc80-cf4628fd772e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:28 np0005485008 nova_compute[192512]: 2025-10-13 15:45:28.711 2 DEBUG nova.compute.manager [req-5cd4f923-20c5-4760-9514-9a889c7cffdf req-ee126f6f-5871-44c5-a3f8-b4641a775fe2 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Received event network-vif-deleted-74a8a863-ae9d-45d4-ab72-b3d0e08d02f2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.883 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.884 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.885 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.885 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.886 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.887 2 INFO nova.compute.manager [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Terminating instance#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.888 2 DEBUG nova.compute.manager [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:45:29 np0005485008 kernel: tap6b295aff-c1 (unregistering): left promiscuous mode
Oct 13 11:45:29 np0005485008 NetworkManager[51587]: <info>  [1760370329.9222] device (tap6b295aff-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:29Z|00088|binding|INFO|Releasing lport 6b295aff-c137-4935-b9a4-2b8d088fb4f6 from this chassis (sb_readonly=0)
Oct 13 11:45:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:29Z|00089|binding|INFO|Setting lport 6b295aff-c137-4935-b9a4-2b8d088fb4f6 down in Southbound
Oct 13 11:45:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:45:29Z|00090|binding|INFO|Removing iface tap6b295aff-c1 ovn-installed in OVS
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:29.937 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:93:5c 10.100.0.12'], port_security=['fa:16:3e:61:93:5c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ab52c277-77ae-4d69-b9c9-74f1c5c5fa92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de23aa1f8b1f466e8bfa712e3140ce54', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2246fb33-460f-432b-aa37-27ea09f6fda6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c73c3f50-1be2-4143-b9e3-a5e32e8515a9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=6b295aff-c137-4935-b9a4-2b8d088fb4f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:45:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:29.938 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 6b295aff-c137-4935-b9a4-2b8d088fb4f6 in datapath 844f3185-3b42-4e49-9ef5-690ae5e238a0 unbound from our chassis#033[00m
Oct 13 11:45:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:29.940 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 844f3185-3b42-4e49-9ef5-690ae5e238a0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 11:45:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:29.940 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[108c39f1-6150-4b94-80ae-787a5ccd34b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:29.941 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0 namespace which is not needed anymore#033[00m
Oct 13 11:45:29 np0005485008 nova_compute[192512]: 2025-10-13 15:45:29.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:30 np0005485008 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct 13 11:45:30 np0005485008 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 18.397s CPU time.
Oct 13 11:45:30 np0005485008 systemd-machined[152551]: Machine qemu-1-instance-00000002 terminated.
Oct 13 11:45:30 np0005485008 neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0[215060]: [NOTICE]   (215064) : haproxy version is 2.8.14-c23fe91
Oct 13 11:45:30 np0005485008 neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0[215060]: [NOTICE]   (215064) : path to executable is /usr/sbin/haproxy
Oct 13 11:45:30 np0005485008 neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0[215060]: [WARNING]  (215064) : Exiting Master process...
Oct 13 11:45:30 np0005485008 neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0[215060]: [ALERT]    (215064) : Current worker (215066) exited with code 143 (Terminated)
Oct 13 11:45:30 np0005485008 neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0[215060]: [WARNING]  (215064) : All workers exited. Exiting... (0)
Oct 13 11:45:30 np0005485008 systemd[1]: libpod-7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c.scope: Deactivated successfully.
Oct 13 11:45:30 np0005485008 conmon[215060]: conmon 7ccfea5ab79633100b0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c.scope/container/memory.events
Oct 13 11:45:30 np0005485008 podman[216297]: 2025-10-13 15:45:30.107886941 +0000 UTC m=+0.051589443 container died 7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:45:30 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c-userdata-shm.mount: Deactivated successfully.
Oct 13 11:45:30 np0005485008 systemd[1]: var-lib-containers-storage-overlay-cbe390109edb036db966cdf2936d8a28c00d876708616712f75555ec95ed5174-merged.mount: Deactivated successfully.
Oct 13 11:45:30 np0005485008 podman[216297]: 2025-10-13 15:45:30.153337223 +0000 UTC m=+0.097039725 container cleanup 7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 11:45:30 np0005485008 systemd[1]: libpod-conmon-7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c.scope: Deactivated successfully.
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.171 2 INFO nova.virt.libvirt.driver [-] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Instance destroyed successfully.#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.171 2 DEBUG nova.objects.instance [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lazy-loading 'resources' on Instance uuid ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.193 2 DEBUG nova.virt.libvirt.vif [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:42:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteActionsViaActuator-server-235079186',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteactionsviaactuator-server-235079186',id=2,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:42:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de23aa1f8b1f466e8bfa712e3140ce54',ramdisk_id='',reservation_id='r-eofhtcw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',
owner_project_name='tempest-TestExecuteActionsViaActuator-836873667',owner_user_name='tempest-TestExecuteActionsViaActuator-836873667-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:42:55Z,user_data=None,user_id='4732dfe3d815487f863c441d326f4231',uuid=ab52c277-77ae-4d69-b9c9-74f1c5c5fa92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.193 2 DEBUG nova.network.os_vif_util [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converting VIF {"id": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "address": "fa:16:3e:61:93:5c", "network": {"id": "844f3185-3b42-4e49-9ef5-690ae5e238a0", "bridge": "br-int", "label": "tempest-TestExecuteActionsViaActuator-184582397-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b092784e634d42ed896f599383a5cd38", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b295aff-c1", "ovs_interfaceid": "6b295aff-c137-4935-b9a4-2b8d088fb4f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.194 2 DEBUG nova.network.os_vif_util [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=6b295aff-c137-4935-b9a4-2b8d088fb4f6,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b295aff-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.194 2 DEBUG os_vif [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=6b295aff-c137-4935-b9a4-2b8d088fb4f6,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b295aff-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b295aff-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.202 2 INFO os_vif [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:93:5c,bridge_name='br-int',has_traffic_filtering=True,id=6b295aff-c137-4935-b9a4-2b8d088fb4f6,network=Network(844f3185-3b42-4e49-9ef5-690ae5e238a0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b295aff-c1')#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.203 2 INFO nova.virt.libvirt.driver [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Deleting instance files /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92_del#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.203 2 INFO nova.virt.libvirt.driver [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Deletion of /var/lib/nova/instances/ab52c277-77ae-4d69-b9c9-74f1c5c5fa92_del complete#033[00m
Oct 13 11:45:30 np0005485008 podman[216342]: 2025-10-13 15:45:30.240135979 +0000 UTC m=+0.057882399 container remove 7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.246 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5560f206-e74d-4067-96e0-4cb0e198694a]: (4, ('Mon Oct 13 03:45:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0 (7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c)\n7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c\nMon Oct 13 03:45:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0 (7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c)\n7ccfea5ab79633100b0b91c0c8c911bb1c1656661489ffe0917eaa4451c4cf5c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.248 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[99bc03bd-c031-43a7-b288-05142b690635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.249 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap844f3185-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:30 np0005485008 kernel: tap844f3185-30: left promiscuous mode
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.261 2 INFO nova.compute.manager [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.262 2 DEBUG oslo.service.loopingcall [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.262 2 DEBUG nova.compute.manager [-] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.263 2 DEBUG nova.network.neutron [-] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.283 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[83a10163-ba41-42ee-9ef6-e26c08d7c729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.313 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f10ddd58-e7e6-4c60-b4a6-71498d0488e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.314 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f5536a53-41ad-403a-863a-dda048d6830f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.331 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[56fcb3c8-8402-40b6-b6d9-9536ad0ad300]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 371608, 'reachable_time': 44412, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216355, 'error': None, 'target': 'ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:30 np0005485008 systemd[1]: run-netns-ovnmeta\x2d844f3185\x2d3b42\x2d4e49\x2d9ef5\x2d690ae5e238a0.mount: Deactivated successfully.
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.346 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-844f3185-3b42-4e49-9ef5-690ae5e238a0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 11:45:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:30.347 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[c09ad6c2-a071-4086-98e1-faef1747ff70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.353 2 DEBUG nova.compute.manager [req-25ad5944-643a-419c-8795-c14e92bfe2c3 req-b0fada66-0399-4397-83f1-f743f50fd17a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received event network-vif-unplugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.353 2 DEBUG oslo_concurrency.lockutils [req-25ad5944-643a-419c-8795-c14e92bfe2c3 req-b0fada66-0399-4397-83f1-f743f50fd17a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.354 2 DEBUG oslo_concurrency.lockutils [req-25ad5944-643a-419c-8795-c14e92bfe2c3 req-b0fada66-0399-4397-83f1-f743f50fd17a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.354 2 DEBUG oslo_concurrency.lockutils [req-25ad5944-643a-419c-8795-c14e92bfe2c3 req-b0fada66-0399-4397-83f1-f743f50fd17a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.354 2 DEBUG nova.compute.manager [req-25ad5944-643a-419c-8795-c14e92bfe2c3 req-b0fada66-0399-4397-83f1-f743f50fd17a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] No waiting events found dispatching network-vif-unplugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.355 2 DEBUG nova.compute.manager [req-25ad5944-643a-419c-8795-c14e92bfe2c3 req-b0fada66-0399-4397-83f1-f743f50fd17a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received event network-vif-unplugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:45:30 np0005485008 nova_compute[192512]: 2025-10-13 15:45:30.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.163 2 DEBUG nova.network.neutron [-] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.188 2 INFO nova.compute.manager [-] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Took 0.93 seconds to deallocate network for instance.#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.229 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.229 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.269 2 DEBUG nova.compute.provider_tree [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.290 2 DEBUG nova.scheduler.client.report [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.328 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.355 2 INFO nova.scheduler.client.report [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Deleted allocations for instance ab52c277-77ae-4d69-b9c9-74f1c5c5fa92#033[00m
Oct 13 11:45:31 np0005485008 nova_compute[192512]: 2025-10-13 15:45:31.459 2 DEBUG oslo_concurrency.lockutils [None req-c22adea5-c6e1-416f-aa7e-74155c527bf4 4732dfe3d815487f863c441d326f4231 de23aa1f8b1f466e8bfa712e3140ce54 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:32 np0005485008 nova_compute[192512]: 2025-10-13 15:45:32.650 2 DEBUG nova.compute.manager [req-38ed4b4c-d4d9-4f2b-908e-00e70fec1126 req-8eb46775-f096-4edc-b2ba-e6a0d9de8cd0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received event network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:32 np0005485008 nova_compute[192512]: 2025-10-13 15:45:32.651 2 DEBUG oslo_concurrency.lockutils [req-38ed4b4c-d4d9-4f2b-908e-00e70fec1126 req-8eb46775-f096-4edc-b2ba-e6a0d9de8cd0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:32 np0005485008 nova_compute[192512]: 2025-10-13 15:45:32.652 2 DEBUG oslo_concurrency.lockutils [req-38ed4b4c-d4d9-4f2b-908e-00e70fec1126 req-8eb46775-f096-4edc-b2ba-e6a0d9de8cd0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:32 np0005485008 nova_compute[192512]: 2025-10-13 15:45:32.652 2 DEBUG oslo_concurrency.lockutils [req-38ed4b4c-d4d9-4f2b-908e-00e70fec1126 req-8eb46775-f096-4edc-b2ba-e6a0d9de8cd0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ab52c277-77ae-4d69-b9c9-74f1c5c5fa92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:32 np0005485008 nova_compute[192512]: 2025-10-13 15:45:32.652 2 DEBUG nova.compute.manager [req-38ed4b4c-d4d9-4f2b-908e-00e70fec1126 req-8eb46775-f096-4edc-b2ba-e6a0d9de8cd0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] No waiting events found dispatching network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:45:32 np0005485008 nova_compute[192512]: 2025-10-13 15:45:32.653 2 WARNING nova.compute.manager [req-38ed4b4c-d4d9-4f2b-908e-00e70fec1126 req-8eb46775-f096-4edc-b2ba-e6a0d9de8cd0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received unexpected event network-vif-plugged-6b295aff-c137-4935-b9a4-2b8d088fb4f6 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:45:32 np0005485008 nova_compute[192512]: 2025-10-13 15:45:32.653 2 DEBUG nova.compute.manager [req-38ed4b4c-d4d9-4f2b-908e-00e70fec1126 req-8eb46775-f096-4edc-b2ba-e6a0d9de8cd0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Received event network-vif-deleted-6b295aff-c137-4935-b9a4-2b8d088fb4f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:45:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:33.948 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:45:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:33.949 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:45:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:45:33.949 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:45:35 np0005485008 nova_compute[192512]: 2025-10-13 15:45:35.010 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370320.0078948, 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:45:35 np0005485008 nova_compute[192512]: 2025-10-13 15:45:35.010 2 INFO nova.compute.manager [-] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:45:35 np0005485008 nova_compute[192512]: 2025-10-13 15:45:35.027 2 DEBUG nova.compute.manager [None req-9e34f5cc-aa4d-4bc8-bb38-36c633fe11c3 - - - - - -] [instance: 3bef78b8-c7d4-43d6-a28f-39a5b4b4250a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:35 np0005485008 nova_compute[192512]: 2025-10-13 15:45:35.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:35 np0005485008 podman[202884]: time="2025-10-13T15:45:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:45:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:45:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:45:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:45:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2988 "" "Go-http-client/1.1"
Oct 13 11:45:35 np0005485008 nova_compute[192512]: 2025-10-13 15:45:35.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:36 np0005485008 nova_compute[192512]: 2025-10-13 15:45:36.887 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370321.8821816, 525c99ab-3297-47fa-996f-96840e6855a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:45:36 np0005485008 nova_compute[192512]: 2025-10-13 15:45:36.888 2 INFO nova.compute.manager [-] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:45:36 np0005485008 nova_compute[192512]: 2025-10-13 15:45:36.958 2 DEBUG nova.compute.manager [None req-fd25ce04-e833-4ce1-975f-29d0cf4ed631 - - - - - -] [instance: 525c99ab-3297-47fa-996f-96840e6855a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:39 np0005485008 nova_compute[192512]: 2025-10-13 15:45:39.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:39 np0005485008 nova_compute[192512]: 2025-10-13 15:45:39.599 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370324.5980537, 7144b3d2-d00d-489a-81a2-11dd796fb608 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:45:39 np0005485008 nova_compute[192512]: 2025-10-13 15:45:39.600 2 INFO nova.compute.manager [-] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:45:39 np0005485008 nova_compute[192512]: 2025-10-13 15:45:39.617 2 DEBUG nova.compute.manager [None req-7135e5ac-4e3c-4358-a1ed-5cec8b47ee57 - - - - - -] [instance: 7144b3d2-d00d-489a-81a2-11dd796fb608] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:40 np0005485008 nova_compute[192512]: 2025-10-13 15:45:40.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:40 np0005485008 nova_compute[192512]: 2025-10-13 15:45:40.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:40 np0005485008 podman[216357]: 2025-10-13 15:45:40.769013818 +0000 UTC m=+0.070784789 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Oct 13 11:45:41 np0005485008 nova_compute[192512]: 2025-10-13 15:45:41.965 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370326.9643016, d31c495b-2e0f-4da1-bc80-cf4628fd772e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:45:41 np0005485008 nova_compute[192512]: 2025-10-13 15:45:41.966 2 INFO nova.compute.manager [-] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:45:41 np0005485008 nova_compute[192512]: 2025-10-13 15:45:41.995 2 DEBUG nova.compute.manager [None req-2f9ccc2a-0d01-43bf-b846-f0db14ba1a95 - - - - - -] [instance: d31c495b-2e0f-4da1-bc80-cf4628fd772e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:45 np0005485008 nova_compute[192512]: 2025-10-13 15:45:45.168 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370330.1677806, ab52c277-77ae-4d69-b9c9-74f1c5c5fa92 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:45:45 np0005485008 nova_compute[192512]: 2025-10-13 15:45:45.169 2 INFO nova.compute.manager [-] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:45:45 np0005485008 nova_compute[192512]: 2025-10-13 15:45:45.188 2 DEBUG nova.compute.manager [None req-4d5b6c07-08cb-4987-ba75-fbb74a0c598a - - - - - -] [instance: ab52c277-77ae-4d69-b9c9-74f1c5c5fa92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:45:45 np0005485008 nova_compute[192512]: 2025-10-13 15:45:45.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:45 np0005485008 nova_compute[192512]: 2025-10-13 15:45:45.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:45:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:45:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:45:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:45:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:45:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:45:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:45:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:45:50 np0005485008 nova_compute[192512]: 2025-10-13 15:45:50.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:50 np0005485008 nova_compute[192512]: 2025-10-13 15:45:50.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:55 np0005485008 nova_compute[192512]: 2025-10-13 15:45:55.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:55 np0005485008 nova_compute[192512]: 2025-10-13 15:45:55.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:45:57 np0005485008 podman[216381]: 2025-10-13 15:45:57.769793418 +0000 UTC m=+0.062613416 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:45:57 np0005485008 podman[216380]: 2025-10-13 15:45:57.775907987 +0000 UTC m=+0.073345178 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent)
Oct 13 11:45:57 np0005485008 podman[216379]: 2025-10-13 15:45:57.775939748 +0000 UTC m=+0.074292538 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 11:45:57 np0005485008 podman[216382]: 2025-10-13 15:45:57.809834621 +0000 UTC m=+0.099871572 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:45:57 np0005485008 podman[216458]: 2025-10-13 15:45:57.877337178 +0000 UTC m=+0.062723239 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:46:00 np0005485008 nova_compute[192512]: 2025-10-13 15:46:00.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:00 np0005485008 nova_compute[192512]: 2025-10-13 15:46:00.459 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:00 np0005485008 nova_compute[192512]: 2025-10-13 15:46:00.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:03 np0005485008 nova_compute[192512]: 2025-10-13 15:46:03.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:03 np0005485008 nova_compute[192512]: 2025-10-13 15:46:03.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:03 np0005485008 nova_compute[192512]: 2025-10-13 15:46:03.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:03 np0005485008 nova_compute[192512]: 2025-10-13 15:46:03.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:46:05 np0005485008 nova_compute[192512]: 2025-10-13 15:46:05.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:05 np0005485008 nova_compute[192512]: 2025-10-13 15:46:05.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:05 np0005485008 nova_compute[192512]: 2025-10-13 15:46:05.431 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:46:05 np0005485008 nova_compute[192512]: 2025-10-13 15:46:05.432 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:46:05 np0005485008 nova_compute[192512]: 2025-10-13 15:46:05.464 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:46:05 np0005485008 podman[202884]: time="2025-10-13T15:46:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:46:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:46:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:46:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:46:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2993 "" "Go-http-client/1.1"
Oct 13 11:46:05 np0005485008 nova_compute[192512]: 2025-10-13 15:46:05.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.463 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.464 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.464 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.464 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.665 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.666 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5885MB free_disk=73.4699478149414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.667 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.667 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.803 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.803 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.859 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.893 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.935 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:46:06 np0005485008 nova_compute[192512]: 2025-10-13 15:46:06.935 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:46:07 np0005485008 nova_compute[192512]: 2025-10-13 15:46:07.937 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:46:10 np0005485008 nova_compute[192512]: 2025-10-13 15:46:10.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:10 np0005485008 nova_compute[192512]: 2025-10-13 15:46:10.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:11 np0005485008 podman[216483]: 2025-10-13 15:46:11.783987366 +0000 UTC m=+0.081163212 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Oct 13 11:46:15 np0005485008 nova_compute[192512]: 2025-10-13 15:46:15.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:15 np0005485008 nova_compute[192512]: 2025-10-13 15:46:15.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:46:17Z|00091|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 13 11:46:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:46:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:46:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:46:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:46:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:46:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:46:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:46:20 np0005485008 nova_compute[192512]: 2025-10-13 15:46:20.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:20 np0005485008 nova_compute[192512]: 2025-10-13 15:46:20.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:25 np0005485008 nova_compute[192512]: 2025-10-13 15:46:25.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:25 np0005485008 nova_compute[192512]: 2025-10-13 15:46:25.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:28 np0005485008 podman[216512]: 2025-10-13 15:46:28.789447363 +0000 UTC m=+0.067401995 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:46:28 np0005485008 podman[216506]: 2025-10-13 15:46:28.789781232 +0000 UTC m=+0.069629833 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:46:28 np0005485008 podman[216505]: 2025-10-13 15:46:28.790367391 +0000 UTC m=+0.078255682 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid)
Oct 13 11:46:28 np0005485008 podman[216504]: 2025-10-13 15:46:28.800014931 +0000 UTC m=+0.094601589 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 13 11:46:28 np0005485008 podman[216513]: 2025-10-13 15:46:28.833056847 +0000 UTC m=+0.108012865 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller)
Oct 13 11:46:30 np0005485008 nova_compute[192512]: 2025-10-13 15:46:30.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:30 np0005485008 nova_compute[192512]: 2025-10-13 15:46:30.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:46:33.950 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:46:33.950 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:46:33.950 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:46:35 np0005485008 nova_compute[192512]: 2025-10-13 15:46:35.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:35 np0005485008 podman[202884]: time="2025-10-13T15:46:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:46:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:46:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:46:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:46:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2996 "" "Go-http-client/1.1"
Oct 13 11:46:35 np0005485008 nova_compute[192512]: 2025-10-13 15:46:35.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:40 np0005485008 nova_compute[192512]: 2025-10-13 15:46:40.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:40 np0005485008 nova_compute[192512]: 2025-10-13 15:46:40.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:42 np0005485008 podman[216608]: 2025-10-13 15:46:42.75674889 +0000 UTC m=+0.056397342 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 11:46:45 np0005485008 nova_compute[192512]: 2025-10-13 15:46:45.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:45 np0005485008 nova_compute[192512]: 2025-10-13 15:46:45.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:46:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:46:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:46:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:46:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:46:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:46:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:46:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:46:50 np0005485008 nova_compute[192512]: 2025-10-13 15:46:50.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:50 np0005485008 nova_compute[192512]: 2025-10-13 15:46:50.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:50 np0005485008 nova_compute[192512]: 2025-10-13 15:46:50.869 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:50 np0005485008 nova_compute[192512]: 2025-10-13 15:46:50.870 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:50 np0005485008 nova_compute[192512]: 2025-10-13 15:46:50.903 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.044 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.044 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.055 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.056 2 INFO nova.compute.claims [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.194 2 DEBUG nova.compute.provider_tree [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.215 2 DEBUG nova.scheduler.client.report [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.250 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.251 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.305 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.306 2 DEBUG nova.network.neutron [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.354 2 INFO nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.421 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.583 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.585 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.585 2 INFO nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Creating image(s)#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.586 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "/var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.586 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "/var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.587 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "/var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.601 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.656 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.657 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.658 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.668 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.719 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.721 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.756 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.757 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.758 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.813 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.814 2 DEBUG nova.virt.disk.api [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Checking if we can resize image /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.815 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.877 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.878 2 DEBUG nova.virt.disk.api [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Cannot resize image /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.878 2 DEBUG nova.objects.instance [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lazy-loading 'migration_context' on Instance uuid 459a4cbd-09f2-4799-bf21-3ac25d43c07b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.905 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.906 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Ensure instance console log exists: /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.906 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.907 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:46:51 np0005485008 nova_compute[192512]: 2025-10-13 15:46:51.907 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:46:52 np0005485008 nova_compute[192512]: 2025-10-13 15:46:52.796 2 DEBUG nova.network.neutron [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Successfully created port: 70d1ddd3-3cab-4861-85ab-3f675f354de4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 11:46:55 np0005485008 nova_compute[192512]: 2025-10-13 15:46:55.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:55 np0005485008 nova_compute[192512]: 2025-10-13 15:46:55.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:58 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 11:46:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:46:58.252 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:46:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:46:58.252 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.365 2 DEBUG nova.network.neutron [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Successfully updated port: 70d1ddd3-3cab-4861-85ab-3f675f354de4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.414 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.414 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquired lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.415 2 DEBUG nova.network.neutron [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.554 2 DEBUG nova.compute.manager [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received event network-changed-70d1ddd3-3cab-4861-85ab-3f675f354de4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.554 2 DEBUG nova.compute.manager [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Refreshing instance network info cache due to event network-changed-70d1ddd3-3cab-4861-85ab-3f675f354de4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:46:58 np0005485008 nova_compute[192512]: 2025-10-13 15:46:58.554 2 DEBUG oslo_concurrency.lockutils [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:46:59 np0005485008 nova_compute[192512]: 2025-10-13 15:46:59.279 2 DEBUG nova.network.neutron [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 11:46:59 np0005485008 podman[216647]: 2025-10-13 15:46:59.788864083 +0000 UTC m=+0.085075064 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 11:46:59 np0005485008 podman[216646]: 2025-10-13 15:46:59.789895355 +0000 UTC m=+0.094536618 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:46:59 np0005485008 podman[216648]: 2025-10-13 15:46:59.802676193 +0000 UTC m=+0.088122739 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 11:46:59 np0005485008 podman[216653]: 2025-10-13 15:46:59.82769956 +0000 UTC m=+0.109305247 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:46:59 np0005485008 podman[216655]: 2025-10-13 15:46:59.863081928 +0000 UTC m=+0.143266031 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 11:47:00 np0005485008 nova_compute[192512]: 2025-10-13 15:47:00.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:00 np0005485008 nova_compute[192512]: 2025-10-13 15:47:00.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:00 np0005485008 nova_compute[192512]: 2025-10-13 15:47:00.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.472 2 DEBUG nova.network.neutron [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updating instance_info_cache with network_info: [{"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.497 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Releasing lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.498 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Instance network_info: |[{"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.498 2 DEBUG oslo_concurrency.lockutils [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.499 2 DEBUG nova.network.neutron [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Refreshing network info cache for port 70d1ddd3-3cab-4861-85ab-3f675f354de4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.504 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Start _get_guest_xml network_info=[{"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.511 2 WARNING nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.533 2 DEBUG nova.virt.libvirt.host [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.534 2 DEBUG nova.virt.libvirt.host [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.545 2 DEBUG nova.virt.libvirt.host [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.546 2 DEBUG nova.virt.libvirt.host [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.546 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.547 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.547 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.547 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.548 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.548 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.548 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.548 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.549 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.549 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.549 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.550 2 DEBUG nova.virt.hardware [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.554 2 DEBUG nova.virt.libvirt.vif [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:46:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-186244157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-186244157',id=8,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f962bf147c5440db74b91b536548537',ramdisk_id='',reservation_id='r-izh30cfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-1778094526',owner_user_name='tempest-TestExecuteBasicStrategy-1778094526-project-admin'},tags=
TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:46:51Z,user_data=None,user_id='1f9376a71e3e4f37b4402b4b1dfb68af',uuid=459a4cbd-09f2-4799-bf21-3ac25d43c07b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.554 2 DEBUG nova.network.os_vif_util [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converting VIF {"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.555 2 DEBUG nova.network.os_vif_util [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0c:44,bridge_name='br-int',has_traffic_filtering=True,id=70d1ddd3-3cab-4861-85ab-3f675f354de4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d1ddd3-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.556 2 DEBUG nova.objects.instance [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lazy-loading 'pci_devices' on Instance uuid 459a4cbd-09f2-4799-bf21-3ac25d43c07b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.588 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <uuid>459a4cbd-09f2-4799-bf21-3ac25d43c07b</uuid>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <name>instance-00000008</name>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteBasicStrategy-server-186244157</nova:name>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:47:01</nova:creationTime>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:user uuid="1f9376a71e3e4f37b4402b4b1dfb68af">tempest-TestExecuteBasicStrategy-1778094526-project-admin</nova:user>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:project uuid="3f962bf147c5440db74b91b536548537">tempest-TestExecuteBasicStrategy-1778094526</nova:project>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        <nova:port uuid="70d1ddd3-3cab-4861-85ab-3f675f354de4">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <entry name="serial">459a4cbd-09f2-4799-bf21-3ac25d43c07b</entry>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <entry name="uuid">459a4cbd-09f2-4799-bf21-3ac25d43c07b</entry>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk.config"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:34:0c:44"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <target dev="tap70d1ddd3-3c"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/console.log" append="off"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:47:01 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:47:01 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:47:01 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:47:01 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.591 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Preparing to wait for external event network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.591 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.592 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.593 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.594 2 DEBUG nova.virt.libvirt.vif [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:46:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-186244157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-186244157',id=8,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f962bf147c5440db74b91b536548537',ramdisk_id='',reservation_id='r-izh30cfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteBasicStrategy-1778094526',owner_user_name='tempest-TestExecuteBasicStrategy-1778094526-project-adm
in'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:46:51Z,user_data=None,user_id='1f9376a71e3e4f37b4402b4b1dfb68af',uuid=459a4cbd-09f2-4799-bf21-3ac25d43c07b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.595 2 DEBUG nova.network.os_vif_util [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converting VIF {"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.596 2 DEBUG nova.network.os_vif_util [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:0c:44,bridge_name='br-int',has_traffic_filtering=True,id=70d1ddd3-3cab-4861-85ab-3f675f354de4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d1ddd3-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.597 2 DEBUG os_vif [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0c:44,bridge_name='br-int',has_traffic_filtering=True,id=70d1ddd3-3cab-4861-85ab-3f675f354de4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d1ddd3-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.600 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70d1ddd3-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70d1ddd3-3c, col_values=(('external_ids', {'iface-id': '70d1ddd3-3cab-4861-85ab-3f675f354de4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:0c:44', 'vm-uuid': '459a4cbd-09f2-4799-bf21-3ac25d43c07b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:01 np0005485008 NetworkManager[51587]: <info>  [1760370421.6110] manager: (tap70d1ddd3-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.623 2 INFO os_vif [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:0c:44,bridge_name='br-int',has_traffic_filtering=True,id=70d1ddd3-3cab-4861-85ab-3f675f354de4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d1ddd3-3c')#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.692 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.692 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.693 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] No VIF found with MAC fa:16:3e:34:0c:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 11:47:01 np0005485008 nova_compute[192512]: 2025-10-13 15:47:01.694 2 INFO nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Using config drive#033[00m
Oct 13 11:47:02 np0005485008 nova_compute[192512]: 2025-10-13 15:47:02.745 2 INFO nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Creating config drive at /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk.config#033[00m
Oct 13 11:47:02 np0005485008 nova_compute[192512]: 2025-10-13 15:47:02.755 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0tj28eu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:47:02 np0005485008 nova_compute[192512]: 2025-10-13 15:47:02.897 2 DEBUG oslo_concurrency.processutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi0tj28eu" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:47:02 np0005485008 kernel: tap70d1ddd3-3c: entered promiscuous mode
Oct 13 11:47:02 np0005485008 NetworkManager[51587]: <info>  [1760370422.9804] manager: (tap70d1ddd3-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 13 11:47:02 np0005485008 nova_compute[192512]: 2025-10-13 15:47:02.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:02 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:02Z|00092|binding|INFO|Claiming lport 70d1ddd3-3cab-4861-85ab-3f675f354de4 for this chassis.
Oct 13 11:47:02 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:02Z|00093|binding|INFO|70d1ddd3-3cab-4861-85ab-3f675f354de4: Claiming fa:16:3e:34:0c:44 10.100.0.7
Oct 13 11:47:02 np0005485008 nova_compute[192512]: 2025-10-13 15:47:02.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:02 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:02.995 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0c:44 10.100.0.7'], port_security=['fa:16:3e:34:0c:44 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '459a4cbd-09f2-4799-bf21-3ac25d43c07b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f962bf147c5440db74b91b536548537', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1a80506-3618-4589-be48-948e60f39c11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1487732c-673b-4519-b2a7-2a296cda2e3f, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=70d1ddd3-3cab-4861-85ab-3f675f354de4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:47:02 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:02.997 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 70d1ddd3-3cab-4861-85ab-3f675f354de4 in datapath 76535e3e-566d-493a-b5f6-2e93f3d55b2f bound to our chassis#033[00m
Oct 13 11:47:02 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:02.998 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76535e3e-566d-493a-b5f6-2e93f3d55b2f#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.009 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ba88be-67bf-4b1b-b183-88e5952ee98f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.010 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76535e3e-51 in ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.013 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76535e3e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.013 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0229a259-e96e-45f2-b8d8-3e60ec428daf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.014 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[96ff249a-57bc-4d81-a2c3-7ea07a63a85d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 systemd-udevd[216770]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:47:03 np0005485008 systemd-machined[152551]: New machine qemu-6-instance-00000008.
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.028 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[870e7e40-44ba-433f-8511-dddd8b80280d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:03 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:03Z|00094|binding|INFO|Setting lport 70d1ddd3-3cab-4861-85ab-3f675f354de4 ovn-installed in OVS
Oct 13 11:47:03 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:03Z|00095|binding|INFO|Setting lport 70d1ddd3-3cab-4861-85ab-3f675f354de4 up in Southbound
Oct 13 11:47:03 np0005485008 systemd[1]: Started Virtual Machine qemu-6-instance-00000008.
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:03 np0005485008 NetworkManager[51587]: <info>  [1760370423.0518] device (tap70d1ddd3-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:47:03 np0005485008 NetworkManager[51587]: <info>  [1760370423.0528] device (tap70d1ddd3-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.049 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[239086ee-50c0-45f3-af32-5292adfd52c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.089 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d73e59ed-3379-4434-b839-679709d99cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 NetworkManager[51587]: <info>  [1760370423.0971] manager: (tap76535e3e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 13 11:47:03 np0005485008 systemd-udevd[216773]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.095 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4670acf4-5e6c-44e2-acbe-14283b376adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.130 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[a339dd2a-8e16-40f3-8c34-c2a7013b1b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.134 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[1b166045-09ef-4913-bd34-2fa1e8e5a234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 NetworkManager[51587]: <info>  [1760370423.1580] device (tap76535e3e-50): carrier: link connected
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.166 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[796fd42f-2bbf-4eee-b500-4fd28b24ed08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.187 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7f73d6ef-b20e-46f3-af1f-2b02d46af532]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76535e3e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396082, 'reachable_time': 24504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 216801, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.205 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f8a060-f235-433e-9dd8-6a2b7f86a3d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:e494'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396082, 'tstamp': 396082}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 216802, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.224 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[542bc198-450f-42cd-b33a-bb5aa9f80e00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76535e3e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396082, 'reachable_time': 24504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 216803, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.260 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[38450f15-e8d2-46df-99e7-559f74d9b694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.340 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[851b955a-d511-4a90-9471-802c1e1b47b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.341 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76535e3e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.342 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.342 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76535e3e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:47:03 np0005485008 NetworkManager[51587]: <info>  [1760370423.3445] manager: (tap76535e3e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:03 np0005485008 kernel: tap76535e3e-50: entered promiscuous mode
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.346 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76535e3e-50, col_values=(('external_ids', {'iface-id': '11e220c3-3c4e-4125-9481-2c127fd1068f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:47:03 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:03Z|00096|binding|INFO|Releasing lport 11e220c3-3c4e-4125-9481-2c127fd1068f from this chassis (sb_readonly=0)
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.349 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76535e3e-566d-493a-b5f6-2e93f3d55b2f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76535e3e-566d-493a-b5f6-2e93f3d55b2f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.350 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[50a501d4-2337-4673-a033-ffb526d2745c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.351 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-76535e3e-566d-493a-b5f6-2e93f3d55b2f
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/76535e3e-566d-493a-b5f6-2e93f3d55b2f.pid.haproxy
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 76535e3e-566d-493a-b5f6-2e93f3d55b2f
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 11:47:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:03.352 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'env', 'PROCESS_TAG=haproxy-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76535e3e-566d-493a-b5f6-2e93f3d55b2f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.623 2 DEBUG nova.compute.manager [req-d60be5ea-bfa3-433d-a8eb-f1b30836557d req-6921784f-3aff-4b4a-aaeb-32124712325e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received event network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.623 2 DEBUG oslo_concurrency.lockutils [req-d60be5ea-bfa3-433d-a8eb-f1b30836557d req-6921784f-3aff-4b4a-aaeb-32124712325e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.623 2 DEBUG oslo_concurrency.lockutils [req-d60be5ea-bfa3-433d-a8eb-f1b30836557d req-6921784f-3aff-4b4a-aaeb-32124712325e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.624 2 DEBUG oslo_concurrency.lockutils [req-d60be5ea-bfa3-433d-a8eb-f1b30836557d req-6921784f-3aff-4b4a-aaeb-32124712325e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.624 2 DEBUG nova.compute.manager [req-d60be5ea-bfa3-433d-a8eb-f1b30836557d req-6921784f-3aff-4b4a-aaeb-32124712325e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Processing event network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 11:47:03 np0005485008 podman[216835]: 2025-10-13 15:47:03.775191774 +0000 UTC m=+0.080012687 container create 7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.802 2 DEBUG nova.network.neutron [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updated VIF entry in instance network info cache for port 70d1ddd3-3cab-4861-85ab-3f675f354de4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.802 2 DEBUG nova.network.neutron [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updating instance_info_cache with network_info: [{"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:47:03 np0005485008 nova_compute[192512]: 2025-10-13 15:47:03.822 2 DEBUG oslo_concurrency.lockutils [req-d75d9ba3-60ab-4ff8-80d2-9c62e55b12e9 req-aa7934ea-e184-4600-b547-7ae229014828 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:47:03 np0005485008 systemd[1]: Started libpod-conmon-7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd.scope.
Oct 13 11:47:03 np0005485008 podman[216835]: 2025-10-13 15:47:03.740679512 +0000 UTC m=+0.045500455 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:47:03 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:47:03 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32256c52a1c37c26f3fd49a68ccc77994bcb1ef1f4ebd25b1e8709d6fdc6df2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 11:47:03 np0005485008 podman[216835]: 2025-10-13 15:47:03.873002582 +0000 UTC m=+0.177823465 container init 7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 11:47:03 np0005485008 podman[216835]: 2025-10-13 15:47:03.878841124 +0000 UTC m=+0.183662007 container start 7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 11:47:03 np0005485008 neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f[216850]: [NOTICE]   (216854) : New worker (216856) forked
Oct 13 11:47:03 np0005485008 neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f[216850]: [NOTICE]   (216854) : Loading success.
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.512 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370424.5120108, 459a4cbd-09f2-4799-bf21-3ac25d43c07b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.513 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] VM Started (Lifecycle Event)#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.515 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.518 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.521 2 INFO nova.virt.libvirt.driver [-] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Instance spawned successfully.#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.522 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.556 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.557 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.557 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.558 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.558 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.559 2 DEBUG nova.virt.libvirt.driver [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.569 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.572 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.595 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.595 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370424.512162, 459a4cbd-09f2-4799-bf21-3ac25d43c07b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.596 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] VM Paused (Lifecycle Event)#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.626 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.628 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370424.517996, 459a4cbd-09f2-4799-bf21-3ac25d43c07b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.629 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.649 2 INFO nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Took 13.07 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.650 2 DEBUG nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.654 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.662 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.707 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.732 2 INFO nova.compute.manager [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Took 13.75 seconds to build instance.#033[00m
Oct 13 11:47:04 np0005485008 nova_compute[192512]: 2025-10-13 15:47:04.753 2 DEBUG oslo_concurrency.lockutils [None req-f19103cd-427d-4798-a56c-a53f4ae75208 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:47:05 np0005485008 podman[202884]: time="2025-10-13T15:47:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:47:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:47:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:47:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:47:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3450 "" "Go-http-client/1.1"
Oct 13 11:47:05 np0005485008 nova_compute[192512]: 2025-10-13 15:47:05.743 2 DEBUG nova.compute.manager [req-14e60bf2-2146-4f24-9b9c-3ebb30e93b98 req-1c739988-fe1c-49fc-a8c3-fe12e5d93aaf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received event network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:47:05 np0005485008 nova_compute[192512]: 2025-10-13 15:47:05.745 2 DEBUG oslo_concurrency.lockutils [req-14e60bf2-2146-4f24-9b9c-3ebb30e93b98 req-1c739988-fe1c-49fc-a8c3-fe12e5d93aaf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:47:05 np0005485008 nova_compute[192512]: 2025-10-13 15:47:05.745 2 DEBUG oslo_concurrency.lockutils [req-14e60bf2-2146-4f24-9b9c-3ebb30e93b98 req-1c739988-fe1c-49fc-a8c3-fe12e5d93aaf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:47:05 np0005485008 nova_compute[192512]: 2025-10-13 15:47:05.745 2 DEBUG oslo_concurrency.lockutils [req-14e60bf2-2146-4f24-9b9c-3ebb30e93b98 req-1c739988-fe1c-49fc-a8c3-fe12e5d93aaf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:47:05 np0005485008 nova_compute[192512]: 2025-10-13 15:47:05.746 2 DEBUG nova.compute.manager [req-14e60bf2-2146-4f24-9b9c-3ebb30e93b98 req-1c739988-fe1c-49fc-a8c3-fe12e5d93aaf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] No waiting events found dispatching network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:47:05 np0005485008 nova_compute[192512]: 2025-10-13 15:47:05.746 2 WARNING nova.compute.manager [req-14e60bf2-2146-4f24-9b9c-3ebb30e93b98 req-1c739988-fe1c-49fc-a8c3-fe12e5d93aaf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received unexpected event network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 for instance with vm_state active and task_state None.#033[00m
Oct 13 11:47:05 np0005485008 nova_compute[192512]: 2025-10-13 15:47:05.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:06 np0005485008 nova_compute[192512]: 2025-10-13 15:47:06.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:06 np0005485008 nova_compute[192512]: 2025-10-13 15:47:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:07 np0005485008 nova_compute[192512]: 2025-10-13 15:47:07.425 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:07 np0005485008 nova_compute[192512]: 2025-10-13 15:47:07.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:07 np0005485008 nova_compute[192512]: 2025-10-13 15:47:07.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:47:07 np0005485008 nova_compute[192512]: 2025-10-13 15:47:07.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:47:08 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:08.255 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:47:08 np0005485008 nova_compute[192512]: 2025-10-13 15:47:08.330 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:47:08 np0005485008 nova_compute[192512]: 2025-10-13 15:47:08.331 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:47:08 np0005485008 nova_compute[192512]: 2025-10-13 15:47:08.331 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:47:08 np0005485008 nova_compute[192512]: 2025-10-13 15:47:08.331 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 459a4cbd-09f2-4799-bf21-3ac25d43c07b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:47:10 np0005485008 nova_compute[192512]: 2025-10-13 15:47:10.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.154 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updating instance_info_cache with network_info: [{"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.200 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.201 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.202 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.202 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.234 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.235 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.235 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.235 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.321 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.389 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.390 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.458 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.611 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.613 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5719MB free_disk=73.46918487548828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.613 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.613 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.717 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 459a4cbd-09f2-4799-bf21-3ac25d43c07b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.718 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.718 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.822 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.838 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.860 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:47:11 np0005485008 nova_compute[192512]: 2025-10-13 15:47:11.861 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:47:13 np0005485008 podman[216880]: 2025-10-13 15:47:13.768440822 +0000 UTC m=+0.062770210 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Oct 13 11:47:13 np0005485008 nova_compute[192512]: 2025-10-13 15:47:13.858 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:47:15 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:15Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:0c:44 10.100.0.7
Oct 13 11:47:15 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:15Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:0c:44 10.100.0.7
Oct 13 11:47:15 np0005485008 nova_compute[192512]: 2025-10-13 15:47:15.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:16 np0005485008 nova_compute[192512]: 2025-10-13 15:47:16.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:47:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:47:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:47:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:47:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:47:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:47:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:47:20 np0005485008 nova_compute[192512]: 2025-10-13 15:47:20.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:21 np0005485008 nova_compute[192512]: 2025-10-13 15:47:21.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:25 np0005485008 nova_compute[192512]: 2025-10-13 15:47:25.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:26 np0005485008 nova_compute[192512]: 2025-10-13 15:47:26.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:30 np0005485008 nova_compute[192512]: 2025-10-13 15:47:30.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:30 np0005485008 podman[216913]: 2025-10-13 15:47:30.78652127 +0000 UTC m=+0.077479728 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 11:47:30 np0005485008 podman[216914]: 2025-10-13 15:47:30.799011578 +0000 UTC m=+0.086126166 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 11:47:30 np0005485008 podman[216915]: 2025-10-13 15:47:30.803939751 +0000 UTC m=+0.084163815 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:47:30 np0005485008 podman[216912]: 2025-10-13 15:47:30.822411165 +0000 UTC m=+0.118600195 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:47:30 np0005485008 podman[216920]: 2025-10-13 15:47:30.854398619 +0000 UTC m=+0.128372179 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 11:47:31 np0005485008 nova_compute[192512]: 2025-10-13 15:47:31.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:33 np0005485008 ovn_controller[94758]: 2025-10-13T15:47:33Z|00097|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 13 11:47:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:33.951 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:47:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:33.953 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:47:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:47:33.954 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:47:35 np0005485008 podman[202884]: time="2025-10-13T15:47:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:47:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:47:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:47:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:47:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3461 "" "Go-http-client/1.1"
Oct 13 11:47:35 np0005485008 nova_compute[192512]: 2025-10-13 15:47:35.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:36 np0005485008 nova_compute[192512]: 2025-10-13 15:47:36.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:40 np0005485008 nova_compute[192512]: 2025-10-13 15:47:40.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:41 np0005485008 nova_compute[192512]: 2025-10-13 15:47:41.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:44 np0005485008 podman[217012]: 2025-10-13 15:47:44.805584055 +0000 UTC m=+0.092745882 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64)
Oct 13 11:47:45 np0005485008 nova_compute[192512]: 2025-10-13 15:47:45.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:46 np0005485008 nova_compute[192512]: 2025-10-13 15:47:46.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:47:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:47:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:47:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:47:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:47:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:47:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:47:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:47:50 np0005485008 nova_compute[192512]: 2025-10-13 15:47:50.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:51 np0005485008 nova_compute[192512]: 2025-10-13 15:47:51.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:55 np0005485008 nova_compute[192512]: 2025-10-13 15:47:55.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:47:56 np0005485008 nova_compute[192512]: 2025-10-13 15:47:56.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:00 np0005485008 nova_compute[192512]: 2025-10-13 15:48:00.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:01 np0005485008 nova_compute[192512]: 2025-10-13 15:48:01.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:01 np0005485008 nova_compute[192512]: 2025-10-13 15:48:01.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:01 np0005485008 podman[217036]: 2025-10-13 15:48:01.780393017 +0000 UTC m=+0.064198795 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:48:01 np0005485008 podman[217034]: 2025-10-13 15:48:01.798919713 +0000 UTC m=+0.089938105 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:48:01 np0005485008 podman[217033]: 2025-10-13 15:48:01.799025306 +0000 UTC m=+0.083017820 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=multipathd)
Oct 13 11:48:01 np0005485008 podman[217035]: 2025-10-13 15:48:01.824615181 +0000 UTC m=+0.108840602 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 11:48:01 np0005485008 podman[217037]: 2025-10-13 15:48:01.854559751 +0000 UTC m=+0.133696564 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS)
Oct 13 11:48:04 np0005485008 nova_compute[192512]: 2025-10-13 15:48:04.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:04 np0005485008 nova_compute[192512]: 2025-10-13 15:48:04.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:04 np0005485008 nova_compute[192512]: 2025-10-13 15:48:04.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:04 np0005485008 nova_compute[192512]: 2025-10-13 15:48:04.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:48:05 np0005485008 podman[202884]: time="2025-10-13T15:48:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:48:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:48:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:48:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:48:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3460 "" "Go-http-client/1.1"
Oct 13 11:48:05 np0005485008 nova_compute[192512]: 2025-10-13 15:48:05.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:06 np0005485008 nova_compute[192512]: 2025-10-13 15:48:06.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:06 np0005485008 nova_compute[192512]: 2025-10-13 15:48:06.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:08 np0005485008 nova_compute[192512]: 2025-10-13 15:48:08.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:09 np0005485008 nova_compute[192512]: 2025-10-13 15:48:09.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:09 np0005485008 nova_compute[192512]: 2025-10-13 15:48:09.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:48:09 np0005485008 nova_compute[192512]: 2025-10-13 15:48:09.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:48:10 np0005485008 nova_compute[192512]: 2025-10-13 15:48:10.337 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:48:10 np0005485008 nova_compute[192512]: 2025-10-13 15:48:10.338 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:48:10 np0005485008 nova_compute[192512]: 2025-10-13 15:48:10.338 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:48:10 np0005485008 nova_compute[192512]: 2025-10-13 15:48:10.338 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 459a4cbd-09f2-4799-bf21-3ac25d43c07b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:48:10 np0005485008 nova_compute[192512]: 2025-10-13 15:48:10.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.507 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updating instance_info_cache with network_info: [{"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.524 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-459a4cbd-09f2-4799-bf21-3ac25d43c07b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.525 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.526 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.526 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.546 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.547 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.547 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.547 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.633 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.701 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.702 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.764 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.923 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.925 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5701MB free_disk=73.44103622436523GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.925 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:11 np0005485008 nova_compute[192512]: 2025-10-13 15:48:11.925 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.002 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 459a4cbd-09f2-4799-bf21-3ac25d43c07b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.002 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.003 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.018 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.034 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.034 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.052 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.072 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.120 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.141 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.142 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:48:12 np0005485008 nova_compute[192512]: 2025-10-13 15:48:12.142 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:15 np0005485008 podman[217145]: 2025-10-13 15:48:15.76936985 +0000 UTC m=+0.064031761 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible)
Oct 13 11:48:15 np0005485008 nova_compute[192512]: 2025-10-13 15:48:15.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:16 np0005485008 nova_compute[192512]: 2025-10-13 15:48:16.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:48:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:48:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:48:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:48:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:48:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:48:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:48:20 np0005485008 nova_compute[192512]: 2025-10-13 15:48:20.725 2 DEBUG nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Creating tmpfile /var/lib/nova/instances/tmps9fky5cn to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 11:48:20 np0005485008 nova_compute[192512]: 2025-10-13 15:48:20.727 2 DEBUG nova.compute.manager [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps9fky5cn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 11:48:20 np0005485008 nova_compute[192512]: 2025-10-13 15:48:20.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:21 np0005485008 nova_compute[192512]: 2025-10-13 15:48:21.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:21 np0005485008 nova_compute[192512]: 2025-10-13 15:48:21.842 2 DEBUG nova.compute.manager [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps9fky5cn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dc61e87f-adf3-48a5-9f4a-59e6f1125246',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 11:48:21 np0005485008 nova_compute[192512]: 2025-10-13 15:48:21.884 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-dc61e87f-adf3-48a5-9f4a-59e6f1125246" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:48:21 np0005485008 nova_compute[192512]: 2025-10-13 15:48:21.884 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-dc61e87f-adf3-48a5-9f4a-59e6f1125246" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:48:21 np0005485008 nova_compute[192512]: 2025-10-13 15:48:21.885 2 DEBUG nova.network.neutron [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.121 2 DEBUG nova.network.neutron [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Updating instance_info_cache with network_info: [{"id": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "address": "fa:16:3e:7b:d3:56", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3086faa-e5", "ovs_interfaceid": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.138 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-dc61e87f-adf3-48a5-9f4a-59e6f1125246" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.139 2 DEBUG nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps9fky5cn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dc61e87f-adf3-48a5-9f4a-59e6f1125246',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.140 2 DEBUG nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Creating instance directory: /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.140 2 DEBUG nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Creating disk.info with the contents: {'/var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk': 'qcow2', '/var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.141 2 DEBUG nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.141 2 DEBUG nova.objects.instance [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid dc61e87f-adf3-48a5-9f4a-59e6f1125246 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.165 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.258 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.259 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.259 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.269 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.329 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.331 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.366 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.367 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.368 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.431 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.432 2 DEBUG nova.virt.disk.api [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.432 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.499 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.500 2 DEBUG nova.virt.disk.api [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.501 2 DEBUG nova.objects.instance [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid dc61e87f-adf3-48a5-9f4a-59e6f1125246 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.521 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.548 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk.config 485376" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.550 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk.config to /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.550 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk.config /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.965 2 DEBUG oslo_concurrency.processutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246/disk.config /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.966 2 DEBUG nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.967 2 DEBUG nova.virt.libvirt.vif [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:46:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1111506034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1111506034',id=7,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:46:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3f962bf147c5440db74b91b536548537',ramdisk_id='',reservation_id='r-ae8fs8l2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1778094526',owner_user_name='tempest-TestExecuteBasicStrategy-1778094526-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:46:36Z,user_data=None,user_id='1f9376a71e3e4f37b4402b4b1dfb68af',uuid=dc61e87f-adf3-48a5-9f4a-59e6f1125246,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "address": "fa:16:3e:7b:d3:56", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb3086faa-e5", "ovs_interfaceid": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.967 2 DEBUG nova.network.os_vif_util [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "address": "fa:16:3e:7b:d3:56", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb3086faa-e5", "ovs_interfaceid": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.968 2 DEBUG nova.network.os_vif_util [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:d3:56,bridge_name='br-int',has_traffic_filtering=True,id=b3086faa-e511-43a2-98d2-9d0a7c3197b4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3086faa-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.969 2 DEBUG os_vif [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:d3:56,bridge_name='br-int',has_traffic_filtering=True,id=b3086faa-e511-43a2-98d2-9d0a7c3197b4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3086faa-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.973 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3086faa-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.973 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3086faa-e5, col_values=(('external_ids', {'iface-id': 'b3086faa-e511-43a2-98d2-9d0a7c3197b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:d3:56', 'vm-uuid': 'dc61e87f-adf3-48a5-9f4a-59e6f1125246'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:23 np0005485008 NetworkManager[51587]: <info>  [1760370503.9766] manager: (tapb3086faa-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.983 2 INFO os_vif [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:d3:56,bridge_name='br-int',has_traffic_filtering=True,id=b3086faa-e511-43a2-98d2-9d0a7c3197b4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3086faa-e5')#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.983 2 DEBUG nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 11:48:23 np0005485008 nova_compute[192512]: 2025-10-13 15:48:23.984 2 DEBUG nova.compute.manager [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps9fky5cn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dc61e87f-adf3-48a5-9f4a-59e6f1125246',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 11:48:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:24.744 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:48:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:24.745 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:48:24 np0005485008 nova_compute[192512]: 2025-10-13 15:48:24.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:25 np0005485008 nova_compute[192512]: 2025-10-13 15:48:25.027 2 DEBUG nova.network.neutron [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Port b3086faa-e511-43a2-98d2-9d0a7c3197b4 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 11:48:25 np0005485008 nova_compute[192512]: 2025-10-13 15:48:25.029 2 DEBUG nova.compute.manager [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmps9fky5cn',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='dc61e87f-adf3-48a5-9f4a-59e6f1125246',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 11:48:25 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 11:48:25 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 11:48:25 np0005485008 kernel: tapb3086faa-e5: entered promiscuous mode
Oct 13 11:48:25 np0005485008 NetworkManager[51587]: <info>  [1760370505.3411] manager: (tapb3086faa-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Oct 13 11:48:25 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:25Z|00098|binding|INFO|Claiming lport b3086faa-e511-43a2-98d2-9d0a7c3197b4 for this additional chassis.
Oct 13 11:48:25 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:25Z|00099|binding|INFO|b3086faa-e511-43a2-98d2-9d0a7c3197b4: Claiming fa:16:3e:7b:d3:56 10.100.0.6
Oct 13 11:48:25 np0005485008 nova_compute[192512]: 2025-10-13 15:48:25.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:25 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:25Z|00100|binding|INFO|Setting lport b3086faa-e511-43a2-98d2-9d0a7c3197b4 ovn-installed in OVS
Oct 13 11:48:25 np0005485008 nova_compute[192512]: 2025-10-13 15:48:25.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:25 np0005485008 nova_compute[192512]: 2025-10-13 15:48:25.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:25 np0005485008 systemd-udevd[217232]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:48:25 np0005485008 NetworkManager[51587]: <info>  [1760370505.3902] device (tapb3086faa-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:48:25 np0005485008 NetworkManager[51587]: <info>  [1760370505.3925] device (tapb3086faa-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:48:25 np0005485008 systemd-machined[152551]: New machine qemu-7-instance-00000007.
Oct 13 11:48:25 np0005485008 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct 13 11:48:25 np0005485008 nova_compute[192512]: 2025-10-13 15:48:25.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:26 np0005485008 nova_compute[192512]: 2025-10-13 15:48:26.585 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370506.5851479, dc61e87f-adf3-48a5-9f4a-59e6f1125246 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:48:26 np0005485008 nova_compute[192512]: 2025-10-13 15:48:26.586 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] VM Started (Lifecycle Event)#033[00m
Oct 13 11:48:26 np0005485008 nova_compute[192512]: 2025-10-13 15:48:26.605 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:48:27 np0005485008 nova_compute[192512]: 2025-10-13 15:48:27.326 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370507.3257647, dc61e87f-adf3-48a5-9f4a-59e6f1125246 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:48:27 np0005485008 nova_compute[192512]: 2025-10-13 15:48:27.327 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:48:27 np0005485008 nova_compute[192512]: 2025-10-13 15:48:27.361 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:48:27 np0005485008 nova_compute[192512]: 2025-10-13 15:48:27.366 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:48:27 np0005485008 nova_compute[192512]: 2025-10-13 15:48:27.402 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 11:48:28 np0005485008 nova_compute[192512]: 2025-10-13 15:48:28.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:29Z|00101|binding|INFO|Claiming lport b3086faa-e511-43a2-98d2-9d0a7c3197b4 for this chassis.
Oct 13 11:48:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:29Z|00102|binding|INFO|b3086faa-e511-43a2-98d2-9d0a7c3197b4: Claiming fa:16:3e:7b:d3:56 10.100.0.6
Oct 13 11:48:29 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:29Z|00103|binding|INFO|Setting lport b3086faa-e511-43a2-98d2-9d0a7c3197b4 up in Southbound
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.585 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:d3:56 10.100.0.6'], port_security=['fa:16:3e:7b:d3:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc61e87f-adf3-48a5-9f4a-59e6f1125246', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f962bf147c5440db74b91b536548537', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'd1a80506-3618-4589-be48-948e60f39c11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1487732c-673b-4519-b2a7-2a296cda2e3f, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=b3086faa-e511-43a2-98d2-9d0a7c3197b4) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.587 103642 INFO neutron.agent.ovn.metadata.agent [-] Port b3086faa-e511-43a2-98d2-9d0a7c3197b4 in datapath 76535e3e-566d-493a-b5f6-2e93f3d55b2f bound to our chassis#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.589 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76535e3e-566d-493a-b5f6-2e93f3d55b2f#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.609 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e97451e9-5e10-4036-8a6d-33e1b5f9c612]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.648 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[faac25cd-92e9-4197-a027-a17b7e801023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.653 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[99129b38-64c5-4c1d-9670-ab67bcdfddd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.686 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4688bd-de7a-4497-87b1-01cafd6ea4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:29 np0005485008 nova_compute[192512]: 2025-10-13 15:48:29.705 2 INFO nova.compute.manager [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Post operation of migration started#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.707 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce74508-4b68-437a-a576-05a98487aedf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76535e3e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396082, 'reachable_time': 24504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 217270, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.727 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d94e32fb-1c4f-492e-be4b-174822dc98fd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76535e3e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396095, 'tstamp': 396095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 217271, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76535e3e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396099, 'tstamp': 396099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 217271, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.729 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76535e3e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:29 np0005485008 nova_compute[192512]: 2025-10-13 15:48:29.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:29 np0005485008 nova_compute[192512]: 2025-10-13 15:48:29.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.734 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76535e3e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.734 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.735 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76535e3e-50, col_values=(('external_ids', {'iface-id': '11e220c3-3c4e-4125-9481-2c127fd1068f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:29.736 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:48:30 np0005485008 nova_compute[192512]: 2025-10-13 15:48:30.056 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-dc61e87f-adf3-48a5-9f4a-59e6f1125246" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:48:30 np0005485008 nova_compute[192512]: 2025-10-13 15:48:30.057 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-dc61e87f-adf3-48a5-9f4a-59e6f1125246" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:48:30 np0005485008 nova_compute[192512]: 2025-10-13 15:48:30.057 2 DEBUG nova.network.neutron [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:48:30 np0005485008 nova_compute[192512]: 2025-10-13 15:48:30.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:31.747 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:32 np0005485008 podman[217274]: 2025-10-13 15:48:32.780536014 +0000 UTC m=+0.065502745 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:48:32 np0005485008 podman[217272]: 2025-10-13 15:48:32.784550869 +0000 UTC m=+0.074906148 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 11:48:32 np0005485008 podman[217275]: 2025-10-13 15:48:32.79264194 +0000 UTC m=+0.074070912 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:48:32 np0005485008 podman[217273]: 2025-10-13 15:48:32.808713229 +0000 UTC m=+0.096754636 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 13 11:48:32 np0005485008 podman[217276]: 2025-10-13 15:48:32.841344562 +0000 UTC m=+0.120408081 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 13 11:48:33 np0005485008 nova_compute[192512]: 2025-10-13 15:48:33.372 2 DEBUG nova.network.neutron [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Updating instance_info_cache with network_info: [{"id": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "address": "fa:16:3e:7b:d3:56", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3086faa-e5", "ovs_interfaceid": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:48:33 np0005485008 nova_compute[192512]: 2025-10-13 15:48:33.407 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-dc61e87f-adf3-48a5-9f4a-59e6f1125246" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:48:33 np0005485008 nova_compute[192512]: 2025-10-13 15:48:33.425 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:33 np0005485008 nova_compute[192512]: 2025-10-13 15:48:33.425 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:33 np0005485008 nova_compute[192512]: 2025-10-13 15:48:33.425 2 DEBUG oslo_concurrency.lockutils [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:33 np0005485008 nova_compute[192512]: 2025-10-13 15:48:33.431 2 INFO nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 11:48:33 np0005485008 virtqemud[192082]: Domain id=7 name='instance-00000007' uuid=dc61e87f-adf3-48a5-9f4a-59e6f1125246 is tainted: custom-monitor
Oct 13 11:48:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:33.951 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:33.952 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:33.953 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:34 np0005485008 nova_compute[192512]: 2025-10-13 15:48:34.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:34 np0005485008 nova_compute[192512]: 2025-10-13 15:48:34.442 2 INFO nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 11:48:35 np0005485008 nova_compute[192512]: 2025-10-13 15:48:35.450 2 INFO nova.virt.libvirt.driver [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 11:48:35 np0005485008 nova_compute[192512]: 2025-10-13 15:48:35.455 2 DEBUG nova.compute.manager [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:48:35 np0005485008 nova_compute[192512]: 2025-10-13 15:48:35.485 2 DEBUG nova.objects.instance [None req-d03e1f4c-100e-45a7-aa55-541b018653dd f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 11:48:35 np0005485008 podman[202884]: time="2025-10-13T15:48:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:48:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:48:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:48:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:48:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3467 "" "Go-http-client/1.1"
Oct 13 11:48:35 np0005485008 nova_compute[192512]: 2025-10-13 15:48:35.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:39 np0005485008 nova_compute[192512]: 2025-10-13 15:48:39.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:40 np0005485008 nova_compute[192512]: 2025-10-13 15:48:40.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.122 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.123 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.123 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.124 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.124 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.126 2 INFO nova.compute.manager [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Terminating instance#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.128 2 DEBUG nova.compute.manager [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:48:41 np0005485008 kernel: tap70d1ddd3-3c (unregistering): left promiscuous mode
Oct 13 11:48:41 np0005485008 NetworkManager[51587]: <info>  [1760370521.1539] device (tap70d1ddd3-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:41Z|00104|binding|INFO|Releasing lport 70d1ddd3-3cab-4861-85ab-3f675f354de4 from this chassis (sb_readonly=0)
Oct 13 11:48:41 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:41Z|00105|binding|INFO|Setting lport 70d1ddd3-3cab-4861-85ab-3f675f354de4 down in Southbound
Oct 13 11:48:41 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:41Z|00106|binding|INFO|Removing iface tap70d1ddd3-3c ovn-installed in OVS
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.214 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:0c:44 10.100.0.7'], port_security=['fa:16:3e:34:0c:44 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '459a4cbd-09f2-4799-bf21-3ac25d43c07b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f962bf147c5440db74b91b536548537', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1a80506-3618-4589-be48-948e60f39c11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1487732c-673b-4519-b2a7-2a296cda2e3f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=70d1ddd3-3cab-4861-85ab-3f675f354de4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.216 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 70d1ddd3-3cab-4861-85ab-3f675f354de4 in datapath 76535e3e-566d-493a-b5f6-2e93f3d55b2f unbound from our chassis#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.218 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76535e3e-566d-493a-b5f6-2e93f3d55b2f#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.242 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a69ece61-59ed-4ebf-bac7-87b0023ff0aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:41 np0005485008 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct 13 11:48:41 np0005485008 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Consumed 16.259s CPU time.
Oct 13 11:48:41 np0005485008 systemd-machined[152551]: Machine qemu-6-instance-00000008 terminated.
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.283 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[de8a9ea9-35c6-4d3f-a604-8d3c2e80aa78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.287 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ed89e92f-98cf-41ad-8cd9-d12c54af17c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.319 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f9aa66dc-d42e-4e57-b032-8ed17a3271a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.344 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0c310b-76dc-4b6b-b961-ba436e760127]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76535e3e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:e4:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396082, 'reachable_time': 24504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 217385, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.370 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9090edf2-c4e1-4bdb-8764-41ae5a4ddb85]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap76535e3e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396095, 'tstamp': 396095}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 217388, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76535e3e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396099, 'tstamp': 396099}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 217388, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.371 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76535e3e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.383 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76535e3e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.383 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.383 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76535e3e-50, col_values=(('external_ids', {'iface-id': '11e220c3-3c4e-4125-9481-2c127fd1068f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:41.384 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.413 2 INFO nova.virt.libvirt.driver [-] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Instance destroyed successfully.#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.414 2 DEBUG nova.objects.instance [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lazy-loading 'resources' on Instance uuid 459a4cbd-09f2-4799-bf21-3ac25d43c07b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.432 2 DEBUG nova.virt.libvirt.vif [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:46:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-186244157',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-186244157',id=8,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:47:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f962bf147c5440db74b91b536548537',ramdisk_id='',reservation_id='r-izh30cfc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1778094526',owner_user_name='tempest-TestExecuteBasicStrategy-1778094526-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:47:04Z,user_data=None,user_id='1f9376a71e3e4f37b4402b4b1dfb68af',uuid=459a4cbd-09f2-4799-bf21-3ac25d43c07b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.433 2 DEBUG nova.network.os_vif_util [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converting VIF {"id": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "address": "fa:16:3e:34:0c:44", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d1ddd3-3c", "ovs_interfaceid": "70d1ddd3-3cab-4861-85ab-3f675f354de4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.433 2 DEBUG nova.network.os_vif_util [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:0c:44,bridge_name='br-int',has_traffic_filtering=True,id=70d1ddd3-3cab-4861-85ab-3f675f354de4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d1ddd3-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.434 2 DEBUG os_vif [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:0c:44,bridge_name='br-int',has_traffic_filtering=True,id=70d1ddd3-3cab-4861-85ab-3f675f354de4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d1ddd3-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70d1ddd3-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.443 2 INFO os_vif [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:0c:44,bridge_name='br-int',has_traffic_filtering=True,id=70d1ddd3-3cab-4861-85ab-3f675f354de4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d1ddd3-3c')#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.444 2 INFO nova.virt.libvirt.driver [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Deleting instance files /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b_del#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.445 2 INFO nova.virt.libvirt.driver [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Deletion of /var/lib/nova/instances/459a4cbd-09f2-4799-bf21-3ac25d43c07b_del complete#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.474 2 DEBUG nova.compute.manager [req-07600f98-f314-445b-b84e-f0e596b40ca0 req-d619aaa7-45f4-4b7f-9370-933815e82e76 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received event network-vif-unplugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.475 2 DEBUG oslo_concurrency.lockutils [req-07600f98-f314-445b-b84e-f0e596b40ca0 req-d619aaa7-45f4-4b7f-9370-933815e82e76 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.475 2 DEBUG oslo_concurrency.lockutils [req-07600f98-f314-445b-b84e-f0e596b40ca0 req-d619aaa7-45f4-4b7f-9370-933815e82e76 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.476 2 DEBUG oslo_concurrency.lockutils [req-07600f98-f314-445b-b84e-f0e596b40ca0 req-d619aaa7-45f4-4b7f-9370-933815e82e76 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.476 2 DEBUG nova.compute.manager [req-07600f98-f314-445b-b84e-f0e596b40ca0 req-d619aaa7-45f4-4b7f-9370-933815e82e76 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] No waiting events found dispatching network-vif-unplugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.477 2 DEBUG nova.compute.manager [req-07600f98-f314-445b-b84e-f0e596b40ca0 req-d619aaa7-45f4-4b7f-9370-933815e82e76 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received event network-vif-unplugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.500 2 INFO nova.compute.manager [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.501 2 DEBUG oslo.service.loopingcall [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.501 2 DEBUG nova.compute.manager [-] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:48:41 np0005485008 nova_compute[192512]: 2025-10-13 15:48:41.502 2 DEBUG nova.network.neutron [-] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.032 2 DEBUG nova.network.neutron [-] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.047 2 INFO nova.compute.manager [-] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Took 0.55 seconds to deallocate network for instance.#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.090 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.091 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.168 2 DEBUG nova.compute.provider_tree [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.183 2 DEBUG nova.scheduler.client.report [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.207 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.237 2 INFO nova.scheduler.client.report [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Deleted allocations for instance 459a4cbd-09f2-4799-bf21-3ac25d43c07b#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.302 2 DEBUG oslo_concurrency.lockutils [None req-85fa4fbb-0b68-475a-8dac-fbe06eda3afe 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.822 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.822 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.823 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.823 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.823 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.824 2 INFO nova.compute.manager [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Terminating instance#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.825 2 DEBUG nova.compute.manager [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:48:42 np0005485008 kernel: tapb3086faa-e5 (unregistering): left promiscuous mode
Oct 13 11:48:42 np0005485008 NetworkManager[51587]: <info>  [1760370522.8500] device (tapb3086faa-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:48:42 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:42Z|00107|binding|INFO|Releasing lport b3086faa-e511-43a2-98d2-9d0a7c3197b4 from this chassis (sb_readonly=0)
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:42 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:42Z|00108|binding|INFO|Setting lport b3086faa-e511-43a2-98d2-9d0a7c3197b4 down in Southbound
Oct 13 11:48:42 np0005485008 ovn_controller[94758]: 2025-10-13T15:48:42Z|00109|binding|INFO|Removing iface tapb3086faa-e5 ovn-installed in OVS
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:42.866 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:d3:56 10.100.0.6'], port_security=['fa:16:3e:7b:d3:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dc61e87f-adf3-48a5-9f4a-59e6f1125246', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f962bf147c5440db74b91b536548537', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'd1a80506-3618-4589-be48-948e60f39c11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1487732c-673b-4519-b2a7-2a296cda2e3f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=b3086faa-e511-43a2-98d2-9d0a7c3197b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:48:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:42.869 103642 INFO neutron.agent.ovn.metadata.agent [-] Port b3086faa-e511-43a2-98d2-9d0a7c3197b4 in datapath 76535e3e-566d-493a-b5f6-2e93f3d55b2f unbound from our chassis#033[00m
Oct 13 11:48:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:42.871 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76535e3e-566d-493a-b5f6-2e93f3d55b2f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 11:48:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:42.872 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[573668b5-4c00-4f84-83ad-43241f5d829c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:42.873 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f namespace which is not needed anymore#033[00m
Oct 13 11:48:42 np0005485008 nova_compute[192512]: 2025-10-13 15:48:42.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:42 np0005485008 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 13 11:48:42 np0005485008 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 2.334s CPU time.
Oct 13 11:48:42 np0005485008 systemd-machined[152551]: Machine qemu-7-instance-00000007 terminated.
Oct 13 11:48:43 np0005485008 neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f[216850]: [NOTICE]   (216854) : haproxy version is 2.8.14-c23fe91
Oct 13 11:48:43 np0005485008 neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f[216850]: [NOTICE]   (216854) : path to executable is /usr/sbin/haproxy
Oct 13 11:48:43 np0005485008 neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f[216850]: [WARNING]  (216854) : Exiting Master process...
Oct 13 11:48:43 np0005485008 neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f[216850]: [ALERT]    (216854) : Current worker (216856) exited with code 143 (Terminated)
Oct 13 11:48:43 np0005485008 neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f[216850]: [WARNING]  (216854) : All workers exited. Exiting... (0)
Oct 13 11:48:43 np0005485008 systemd[1]: libpod-7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd.scope: Deactivated successfully.
Oct 13 11:48:43 np0005485008 podman[217426]: 2025-10-13 15:48:43.013047854 +0000 UTC m=+0.048550764 container died 7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 11:48:43 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd-userdata-shm.mount: Deactivated successfully.
Oct 13 11:48:43 np0005485008 NetworkManager[51587]: <info>  [1760370523.0479] manager: (tapb3086faa-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct 13 11:48:43 np0005485008 systemd[1]: var-lib-containers-storage-overlay-32256c52a1c37c26f3fd49a68ccc77994bcb1ef1f4ebd25b1e8709d6fdc6df2e-merged.mount: Deactivated successfully.
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:43 np0005485008 podman[217426]: 2025-10-13 15:48:43.057362284 +0000 UTC m=+0.092865204 container cleanup 7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 11:48:43 np0005485008 systemd[1]: libpod-conmon-7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd.scope: Deactivated successfully.
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.089 2 INFO nova.virt.libvirt.driver [-] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Instance destroyed successfully.#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.090 2 DEBUG nova.objects.instance [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lazy-loading 'resources' on Instance uuid dc61e87f-adf3-48a5-9f4a-59e6f1125246 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.111 2 DEBUG nova.virt.libvirt.vif [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T15:46:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteBasicStrategy-server-1111506034',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutebasicstrategy-server-1111506034',id=7,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:46:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f962bf147c5440db74b91b536548537',ramdisk_id='',reservation_id='r-ae8fs8l2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-TestExecuteBasicStrategy-1778094526',owner_user_name='tempest-TestExecuteBasicStrategy-1778094526-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:48:35Z,user_data=None,user_id='1f9376a71e3e4f37b4402b4b1dfb68af',uuid=dc61e87f-adf3-48a5-9f4a-59e6f1125246,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "address": "fa:16:3e:7b:d3:56", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3086faa-e5", "ovs_interfaceid": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.112 2 DEBUG nova.network.os_vif_util [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converting VIF {"id": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "address": "fa:16:3e:7b:d3:56", "network": {"id": "76535e3e-566d-493a-b5f6-2e93f3d55b2f", "bridge": "br-int", "label": "tempest-TestExecuteBasicStrategy-1507903864-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06a436ade88844f0944edc05940f1d7a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3086faa-e5", "ovs_interfaceid": "b3086faa-e511-43a2-98d2-9d0a7c3197b4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.113 2 DEBUG nova.network.os_vif_util [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:d3:56,bridge_name='br-int',has_traffic_filtering=True,id=b3086faa-e511-43a2-98d2-9d0a7c3197b4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3086faa-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.113 2 DEBUG os_vif [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:d3:56,bridge_name='br-int',has_traffic_filtering=True,id=b3086faa-e511-43a2-98d2-9d0a7c3197b4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3086faa-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.115 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3086faa-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.123 2 INFO os_vif [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:d3:56,bridge_name='br-int',has_traffic_filtering=True,id=b3086faa-e511-43a2-98d2-9d0a7c3197b4,network=Network(76535e3e-566d-493a-b5f6-2e93f3d55b2f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3086faa-e5')#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.124 2 INFO nova.virt.libvirt.driver [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Deleting instance files /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246_del#033[00m
Oct 13 11:48:43 np0005485008 podman[217465]: 2025-10-13 15:48:43.124158685 +0000 UTC m=+0.040988807 container remove 7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.124 2 INFO nova.virt.libvirt.driver [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Deletion of /var/lib/nova/instances/dc61e87f-adf3-48a5-9f4a-59e6f1125246_del complete#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.131 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6acdc279-7baa-4c72-a20e-1cc8fa5e7025]: (4, ('Mon Oct 13 03:48:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f (7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd)\n7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd\nMon Oct 13 03:48:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f (7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd)\n7fb7a85ac72e91deaac50c6f34dd71656125dc28a1853d56edba7766176ad0cd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.132 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2bc746-95b0-49e2-b61d-b2c9e0a9f630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.133 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76535e3e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:48:43 np0005485008 kernel: tap76535e3e-50: left promiscuous mode
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.149 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8c75bf26-1676-4a7e-bb22-f907024f6260]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.188 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7136c359-ec65-49d0-a4c6-830d40393e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.190 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6412cfaa-c7b5-4204-8f3b-524091410a30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.206 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3c966e68-2f72-4452-9f01-580433bba6e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396074, 'reachable_time': 38003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 217485, 'error': None, 'target': 'ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.209 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76535e3e-566d-493a-b5f6-2e93f3d55b2f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 11:48:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:48:43.210 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[05cccf1d-41b6-4f51-ad03-e7f12ee8eaf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:48:43 np0005485008 systemd[1]: run-netns-ovnmeta\x2d76535e3e\x2d566d\x2d493a\x2db5f6\x2d2e93f3d55b2f.mount: Deactivated successfully.
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.296 2 INFO nova.compute.manager [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.296 2 DEBUG oslo.service.loopingcall [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.297 2 DEBUG nova.compute.manager [-] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.297 2 DEBUG nova.network.neutron [-] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.518 2 DEBUG nova.compute.manager [req-72976655-a1dd-42d9-928c-f8bef56ea45c req-774058a9-f2de-47ae-b5a8-9ecd14e7350d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Received event network-vif-unplugged-b3086faa-e511-43a2-98d2-9d0a7c3197b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.519 2 DEBUG oslo_concurrency.lockutils [req-72976655-a1dd-42d9-928c-f8bef56ea45c req-774058a9-f2de-47ae-b5a8-9ecd14e7350d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.519 2 DEBUG oslo_concurrency.lockutils [req-72976655-a1dd-42d9-928c-f8bef56ea45c req-774058a9-f2de-47ae-b5a8-9ecd14e7350d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.520 2 DEBUG oslo_concurrency.lockutils [req-72976655-a1dd-42d9-928c-f8bef56ea45c req-774058a9-f2de-47ae-b5a8-9ecd14e7350d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.520 2 DEBUG nova.compute.manager [req-72976655-a1dd-42d9-928c-f8bef56ea45c req-774058a9-f2de-47ae-b5a8-9ecd14e7350d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] No waiting events found dispatching network-vif-unplugged-b3086faa-e511-43a2-98d2-9d0a7c3197b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.520 2 DEBUG nova.compute.manager [req-72976655-a1dd-42d9-928c-f8bef56ea45c req-774058a9-f2de-47ae-b5a8-9ecd14e7350d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Received event network-vif-unplugged-b3086faa-e511-43a2-98d2-9d0a7c3197b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.626 2 DEBUG nova.compute.manager [req-15f92ceb-42e3-4f55-8d8e-300b689c5916 req-b637f178-d7ac-4f57-9b96-708728ad7104 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received event network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.627 2 DEBUG oslo_concurrency.lockutils [req-15f92ceb-42e3-4f55-8d8e-300b689c5916 req-b637f178-d7ac-4f57-9b96-708728ad7104 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.627 2 DEBUG oslo_concurrency.lockutils [req-15f92ceb-42e3-4f55-8d8e-300b689c5916 req-b637f178-d7ac-4f57-9b96-708728ad7104 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.627 2 DEBUG oslo_concurrency.lockutils [req-15f92ceb-42e3-4f55-8d8e-300b689c5916 req-b637f178-d7ac-4f57-9b96-708728ad7104 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "459a4cbd-09f2-4799-bf21-3ac25d43c07b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.628 2 DEBUG nova.compute.manager [req-15f92ceb-42e3-4f55-8d8e-300b689c5916 req-b637f178-d7ac-4f57-9b96-708728ad7104 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] No waiting events found dispatching network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.628 2 WARNING nova.compute.manager [req-15f92ceb-42e3-4f55-8d8e-300b689c5916 req-b637f178-d7ac-4f57-9b96-708728ad7104 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received unexpected event network-vif-plugged-70d1ddd3-3cab-4861-85ab-3f675f354de4 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.628 2 DEBUG nova.compute.manager [req-15f92ceb-42e3-4f55-8d8e-300b689c5916 req-b637f178-d7ac-4f57-9b96-708728ad7104 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Received event network-vif-deleted-70d1ddd3-3cab-4861-85ab-3f675f354de4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.798 2 DEBUG nova.network.neutron [-] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.823 2 INFO nova.compute.manager [-] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Took 0.53 seconds to deallocate network for instance.#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.895 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.896 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.904 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:43 np0005485008 nova_compute[192512]: 2025-10-13 15:48:43.946 2 INFO nova.scheduler.client.report [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Deleted allocations for instance dc61e87f-adf3-48a5-9f4a-59e6f1125246#033[00m
Oct 13 11:48:44 np0005485008 nova_compute[192512]: 2025-10-13 15:48:44.038 2 DEBUG oslo_concurrency.lockutils [None req-42079055-6d6f-4a9e-8a83-613d066be27a 1f9376a71e3e4f37b4402b4b1dfb68af 3f962bf147c5440db74b91b536548537 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.606 2 DEBUG nova.compute.manager [req-02e76017-a804-4389-adc5-a2418c8d5de5 req-b720ad76-e58b-440b-801e-feb32bf5a290 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Received event network-vif-plugged-b3086faa-e511-43a2-98d2-9d0a7c3197b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.606 2 DEBUG oslo_concurrency.lockutils [req-02e76017-a804-4389-adc5-a2418c8d5de5 req-b720ad76-e58b-440b-801e-feb32bf5a290 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.607 2 DEBUG oslo_concurrency.lockutils [req-02e76017-a804-4389-adc5-a2418c8d5de5 req-b720ad76-e58b-440b-801e-feb32bf5a290 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.607 2 DEBUG oslo_concurrency.lockutils [req-02e76017-a804-4389-adc5-a2418c8d5de5 req-b720ad76-e58b-440b-801e-feb32bf5a290 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "dc61e87f-adf3-48a5-9f4a-59e6f1125246-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.607 2 DEBUG nova.compute.manager [req-02e76017-a804-4389-adc5-a2418c8d5de5 req-b720ad76-e58b-440b-801e-feb32bf5a290 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] No waiting events found dispatching network-vif-plugged-b3086faa-e511-43a2-98d2-9d0a7c3197b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.607 2 WARNING nova.compute.manager [req-02e76017-a804-4389-adc5-a2418c8d5de5 req-b720ad76-e58b-440b-801e-feb32bf5a290 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Received unexpected event network-vif-plugged-b3086faa-e511-43a2-98d2-9d0a7c3197b4 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.711 2 DEBUG nova.compute.manager [req-440904da-fbfd-4696-968f-5a74b4cfd9ec req-ede0ca01-1b0c-4f63-904d-a4f191cfa9fb 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Received event network-vif-deleted-b3086faa-e511-43a2-98d2-9d0a7c3197b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:48:45 np0005485008 nova_compute[192512]: 2025-10-13 15:48:45.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:46 np0005485008 podman[217486]: 2025-10-13 15:48:46.775068634 +0000 UTC m=+0.067734030 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.buildah.version=1.33.7)
Oct 13 11:48:48 np0005485008 nova_compute[192512]: 2025-10-13 15:48:48.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:48:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:48:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:48:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:48:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:48:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:48:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:48:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:48:50 np0005485008 nova_compute[192512]: 2025-10-13 15:48:50.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:53 np0005485008 nova_compute[192512]: 2025-10-13 15:48:53.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:55 np0005485008 nova_compute[192512]: 2025-10-13 15:48:55.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:48:56 np0005485008 nova_compute[192512]: 2025-10-13 15:48:56.410 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370521.409379, 459a4cbd-09f2-4799-bf21-3ac25d43c07b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:48:56 np0005485008 nova_compute[192512]: 2025-10-13 15:48:56.411 2 INFO nova.compute.manager [-] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:48:56 np0005485008 nova_compute[192512]: 2025-10-13 15:48:56.433 2 DEBUG nova.compute.manager [None req-bc9a1dc3-160d-46be-86f9-bceeb2846ab5 - - - - - -] [instance: 459a4cbd-09f2-4799-bf21-3ac25d43c07b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:48:58 np0005485008 nova_compute[192512]: 2025-10-13 15:48:58.089 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370523.0877788, dc61e87f-adf3-48a5-9f4a-59e6f1125246 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:48:58 np0005485008 nova_compute[192512]: 2025-10-13 15:48:58.089 2 INFO nova.compute.manager [-] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:48:58 np0005485008 nova_compute[192512]: 2025-10-13 15:48:58.110 2 DEBUG nova.compute.manager [None req-0c2440f9-9f62-4b0f-9540-bd659a2acf8a - - - - - -] [instance: dc61e87f-adf3-48a5-9f4a-59e6f1125246] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:48:58 np0005485008 nova_compute[192512]: 2025-10-13 15:48:58.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:00 np0005485008 nova_compute[192512]: 2025-10-13 15:49:00.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:03 np0005485008 nova_compute[192512]: 2025-10-13 15:49:03.044 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:03 np0005485008 nova_compute[192512]: 2025-10-13 15:49:03.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:03 np0005485008 podman[217510]: 2025-10-13 15:49:03.792379181 +0000 UTC m=+0.075050329 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 13 11:49:03 np0005485008 podman[217511]: 2025-10-13 15:49:03.799042068 +0000 UTC m=+0.080682374 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 11:49:03 np0005485008 podman[217509]: 2025-10-13 15:49:03.811450855 +0000 UTC m=+0.100322026 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251009)
Oct 13 11:49:03 np0005485008 podman[217508]: 2025-10-13 15:49:03.833588925 +0000 UTC m=+0.120322940 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 13 11:49:03 np0005485008 podman[217515]: 2025-10-13 15:49:03.842708809 +0000 UTC m=+0.115292533 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 11:49:04 np0005485008 nova_compute[192512]: 2025-10-13 15:49:04.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:04 np0005485008 nova_compute[192512]: 2025-10-13 15:49:04.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:04 np0005485008 nova_compute[192512]: 2025-10-13 15:49:04.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:49:04 np0005485008 nova_compute[192512]: 2025-10-13 15:49:04.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:05 np0005485008 podman[202884]: time="2025-10-13T15:49:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:49:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:49:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:49:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:49:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2998 "" "Go-http-client/1.1"
Oct 13 11:49:05 np0005485008 nova_compute[192512]: 2025-10-13 15:49:05.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:06 np0005485008 nova_compute[192512]: 2025-10-13 15:49:06.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:06 np0005485008 nova_compute[192512]: 2025-10-13 15:49:06.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:08 np0005485008 nova_compute[192512]: 2025-10-13 15:49:08.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:09 np0005485008 nova_compute[192512]: 2025-10-13 15:49:09.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:09 np0005485008 nova_compute[192512]: 2025-10-13 15:49:09.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:09 np0005485008 nova_compute[192512]: 2025-10-13 15:49:09.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:49:09 np0005485008 nova_compute[192512]: 2025-10-13 15:49:09.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:49:09 np0005485008 nova_compute[192512]: 2025-10-13 15:49:09.441 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:49:10 np0005485008 nova_compute[192512]: 2025-10-13 15:49:10.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.449 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.450 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.450 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.450 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.602 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.605 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5880MB free_disk=73.4699935913086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.605 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.605 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.663 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.663 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.686 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.703 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.732 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:49:11 np0005485008 nova_compute[192512]: 2025-10-13 15:49:11.733 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:49:13 np0005485008 nova_compute[192512]: 2025-10-13 15:49:13.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:13 np0005485008 nova_compute[192512]: 2025-10-13 15:49:13.728 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:49:15 np0005485008 nova_compute[192512]: 2025-10-13 15:49:15.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:17 np0005485008 podman[217613]: 2025-10-13 15:49:17.796966001 +0000 UTC m=+0.084593226 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Oct 13 11:49:18 np0005485008 nova_compute[192512]: 2025-10-13 15:49:18.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:49:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:49:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:49:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:49:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:49:20 np0005485008 nova_compute[192512]: 2025-10-13 15:49:20.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:23 np0005485008 nova_compute[192512]: 2025-10-13 15:49:23.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:49:24.880 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:49:24 np0005485008 nova_compute[192512]: 2025-10-13 15:49:24.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:49:24.882 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:49:25 np0005485008 nova_compute[192512]: 2025-10-13 15:49:25.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:28 np0005485008 nova_compute[192512]: 2025-10-13 15:49:28.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:30 np0005485008 nova_compute[192512]: 2025-10-13 15:49:30.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:33 np0005485008 nova_compute[192512]: 2025-10-13 15:49:33.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:49:33.885 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:49:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:49:33.952 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:49:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:49:33.953 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:49:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:49:33.953 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:49:34 np0005485008 podman[217635]: 2025-10-13 15:49:34.771445884 +0000 UTC m=+0.071641442 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:49:34 np0005485008 podman[217636]: 2025-10-13 15:49:34.778587278 +0000 UTC m=+0.073562584 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, 
config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:49:34 np0005485008 podman[217634]: 2025-10-13 15:49:34.785958377 +0000 UTC m=+0.089296653 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251009)
Oct 13 11:49:34 np0005485008 podman[217637]: 2025-10-13 15:49:34.805572088 +0000 UTC m=+0.096148457 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:49:34 np0005485008 podman[217644]: 2025-10-13 15:49:34.847623378 +0000 UTC m=+0.134062478 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 11:49:35 np0005485008 podman[202884]: time="2025-10-13T15:49:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:49:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:49:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:49:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:49:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3000 "" "Go-http-client/1.1"
Oct 13 11:49:35 np0005485008 nova_compute[192512]: 2025-10-13 15:49:35.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:38 np0005485008 nova_compute[192512]: 2025-10-13 15:49:38.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:40 np0005485008 ovn_controller[94758]: 2025-10-13T15:49:40Z|00110|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 13 11:49:40 np0005485008 nova_compute[192512]: 2025-10-13 15:49:40.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:43 np0005485008 nova_compute[192512]: 2025-10-13 15:49:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:45 np0005485008 nova_compute[192512]: 2025-10-13 15:49:45.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:48 np0005485008 nova_compute[192512]: 2025-10-13 15:49:48.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:48 np0005485008 podman[217741]: 2025-10-13 15:49:48.76044534 +0000 UTC m=+0.062082574 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 11:49:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:49:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:49:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:49:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:49:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:49:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:49:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:49:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:49:50 np0005485008 nova_compute[192512]: 2025-10-13 15:49:50.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:53 np0005485008 nova_compute[192512]: 2025-10-13 15:49:53.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:55 np0005485008 nova_compute[192512]: 2025-10-13 15:49:55.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:49:58 np0005485008 nova_compute[192512]: 2025-10-13 15:49:58.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:01 np0005485008 nova_compute[192512]: 2025-10-13 15:50:01.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:03 np0005485008 nova_compute[192512]: 2025-10-13 15:50:03.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:04 np0005485008 nova_compute[192512]: 2025-10-13 15:50:04.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:04 np0005485008 nova_compute[192512]: 2025-10-13 15:50:04.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:04 np0005485008 nova_compute[192512]: 2025-10-13 15:50:04.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:04 np0005485008 nova_compute[192512]: 2025-10-13 15:50:04.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:50:05 np0005485008 podman[202884]: time="2025-10-13T15:50:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:50:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:50:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:50:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:50:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2995 "" "Go-http-client/1.1"
Oct 13 11:50:05 np0005485008 podman[217765]: 2025-10-13 15:50:05.762586095 +0000 UTC m=+0.065642896 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:50:05 np0005485008 podman[217773]: 2025-10-13 15:50:05.77657301 +0000 UTC m=+0.063150028 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:50:05 np0005485008 podman[217766]: 2025-10-13 15:50:05.776210289 +0000 UTC m=+0.072395986 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible)
Oct 13 11:50:05 np0005485008 podman[217767]: 2025-10-13 15:50:05.800512627 +0000 UTC m=+0.091779941 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:50:05 np0005485008 podman[217774]: 2025-10-13 15:50:05.830951805 +0000 UTC m=+0.117027137 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 11:50:06 np0005485008 nova_compute[192512]: 2025-10-13 15:50:06.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:08 np0005485008 nova_compute[192512]: 2025-10-13 15:50:08.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:08 np0005485008 nova_compute[192512]: 2025-10-13 15:50:08.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:08 np0005485008 nova_compute[192512]: 2025-10-13 15:50:08.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:08 np0005485008 nova_compute[192512]: 2025-10-13 15:50:08.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:09 np0005485008 nova_compute[192512]: 2025-10-13 15:50:09.434 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:11 np0005485008 nova_compute[192512]: 2025-10-13 15:50:11.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:11 np0005485008 nova_compute[192512]: 2025-10-13 15:50:11.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:11 np0005485008 nova_compute[192512]: 2025-10-13 15:50:11.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:50:11 np0005485008 nova_compute[192512]: 2025-10-13 15:50:11.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:50:11 np0005485008 nova_compute[192512]: 2025-10-13 15:50:11.445 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.455 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.456 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.456 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.457 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.662 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.663 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5875MB free_disk=73.4699935913086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.663 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.663 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.763 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.763 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.817 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.846 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.849 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:50:13 np0005485008 nova_compute[192512]: 2025-10-13 15:50:13.849 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:16 np0005485008 nova_compute[192512]: 2025-10-13 15:50:16.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:17 np0005485008 nova_compute[192512]: 2025-10-13 15:50:17.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:17 np0005485008 nova_compute[192512]: 2025-10-13 15:50:17.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 11:50:17 np0005485008 nova_compute[192512]: 2025-10-13 15:50:17.446 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 11:50:17 np0005485008 nova_compute[192512]: 2025-10-13 15:50:17.447 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:50:17 np0005485008 nova_compute[192512]: 2025-10-13 15:50:17.447 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 11:50:18 np0005485008 nova_compute[192512]: 2025-10-13 15:50:18.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:50:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:50:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:50:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:50:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:50:19 np0005485008 podman[217864]: 2025-10-13 15:50:19.768443914 +0000 UTC m=+0.070224099 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-type=git, io.openshift.expose-services=, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Oct 13 11:50:21 np0005485008 nova_compute[192512]: 2025-10-13 15:50:21.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:23 np0005485008 nova_compute[192512]: 2025-10-13 15:50:23.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:26 np0005485008 nova_compute[192512]: 2025-10-13 15:50:26.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:27 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 11:50:28 np0005485008 nova_compute[192512]: 2025-10-13 15:50:28.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:31 np0005485008 nova_compute[192512]: 2025-10-13 15:50:31.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:31.525 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:50:31 np0005485008 nova_compute[192512]: 2025-10-13 15:50:31.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:31.526 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:50:33 np0005485008 nova_compute[192512]: 2025-10-13 15:50:33.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:33.953 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:33.954 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:33.954 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:35 np0005485008 podman[202884]: time="2025-10-13T15:50:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:50:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:50:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:50:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:50:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Oct 13 11:50:36 np0005485008 nova_compute[192512]: 2025-10-13 15:50:36.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:36 np0005485008 podman[217887]: 2025-10-13 15:50:36.755319583 +0000 UTC m=+0.060081163 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 11:50:36 np0005485008 podman[217888]: 2025-10-13 15:50:36.756288613 +0000 UTC m=+0.057316656 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 13 11:50:36 np0005485008 podman[217890]: 2025-10-13 15:50:36.763687384 +0000 UTC m=+0.059373001 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:50:36 np0005485008 podman[217889]: 2025-10-13 15:50:36.788493127 +0000 UTC m=+0.085489665 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:50:36 np0005485008 podman[217891]: 2025-10-13 15:50:36.800297735 +0000 UTC m=+0.093061731 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 11:50:38 np0005485008 nova_compute[192512]: 2025-10-13 15:50:38.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:38 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:38.528 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:50:38 np0005485008 nova_compute[192512]: 2025-10-13 15:50:38.975 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:38 np0005485008 nova_compute[192512]: 2025-10-13 15:50:38.976 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.005 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.281 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.281 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.288 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.289 2 INFO nova.compute.claims [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.462 2 DEBUG nova.compute.provider_tree [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.490 2 DEBUG nova.scheduler.client.report [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.540 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.542 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.589 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.590 2 DEBUG nova.network.neutron [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.617 2 INFO nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.636 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.741 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.743 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.743 2 INFO nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Creating image(s)#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.744 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "/var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.744 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "/var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.745 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "/var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.761 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.835 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.839 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.840 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.863 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.923 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.924 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.960 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.962 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:39 np0005485008 nova_compute[192512]: 2025-10-13 15:50:39.962 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.016 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.017 2 DEBUG nova.virt.disk.api [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Checking if we can resize image /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.018 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.076 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.078 2 DEBUG nova.virt.disk.api [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Cannot resize image /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.078 2 DEBUG nova.objects.instance [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'migration_context' on Instance uuid c6130d7f-0df2-4945-80e2-7287f84b9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.101 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.102 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Ensure instance console log exists: /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.102 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.103 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:40 np0005485008 nova_compute[192512]: 2025-10-13 15:50:40.103 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:41 np0005485008 nova_compute[192512]: 2025-10-13 15:50:41.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:41 np0005485008 nova_compute[192512]: 2025-10-13 15:50:41.459 2 DEBUG nova.network.neutron [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Successfully created port: 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.493 2 DEBUG nova.network.neutron [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Successfully updated port: 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.514 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.515 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquired lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.515 2 DEBUG nova.network.neutron [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.581 2 DEBUG nova.compute.manager [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received event network-changed-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.581 2 DEBUG nova.compute.manager [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Refreshing instance network info cache due to event network-changed-7e3dfdb2-c70c-4729-8016-04ff5e6672d1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.582 2 DEBUG oslo_concurrency.lockutils [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:50:42 np0005485008 nova_compute[192512]: 2025-10-13 15:50:42.663 2 DEBUG nova.network.neutron [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.802 2 DEBUG nova.network.neutron [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Updating instance_info_cache with network_info: [{"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.875 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Releasing lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.875 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Instance network_info: |[{"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.875 2 DEBUG oslo_concurrency.lockutils [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.876 2 DEBUG nova.network.neutron [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Refreshing network info cache for port 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.878 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Start _get_guest_xml network_info=[{"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.883 2 WARNING nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.889 2 DEBUG nova.virt.libvirt.host [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.889 2 DEBUG nova.virt.libvirt.host [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.893 2 DEBUG nova.virt.libvirt.host [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.893 2 DEBUG nova.virt.libvirt.host [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.894 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.894 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.895 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.895 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.895 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.895 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.896 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.896 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.896 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.896 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.897 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.897 2 DEBUG nova.virt.hardware [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.901 2 DEBUG nova.virt.libvirt.vif [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:50:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-2138052222',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-2138052222',id=10,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-p1hbh7th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanc
eStrategy-870116793-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:50:39Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=c6130d7f-0df2-4945-80e2-7287f84b9cdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.902 2 DEBUG nova.network.os_vif_util [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.902 2 DEBUG nova.network.os_vif_util [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:58:8a,bridge_name='br-int',has_traffic_filtering=True,id=7e3dfdb2-c70c-4729-8016-04ff5e6672d1,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3dfdb2-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.903 2 DEBUG nova.objects.instance [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6130d7f-0df2-4945-80e2-7287f84b9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.955 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <uuid>c6130d7f-0df2-4945-80e2-7287f84b9cdc</uuid>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <name>instance-0000000a</name>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-2138052222</nova:name>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:50:43</nova:creationTime>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:user uuid="c560a06879cb4d4a861db9e49a3f22ee">tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin</nova:user>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:project uuid="4d42867337c94506bc652a0e84c5f849">tempest-TestExecuteHostMaintenanceStrategy-870116793</nova:project>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        <nova:port uuid="7e3dfdb2-c70c-4729-8016-04ff5e6672d1">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <entry name="serial">c6130d7f-0df2-4945-80e2-7287f84b9cdc</entry>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <entry name="uuid">c6130d7f-0df2-4945-80e2-7287f84b9cdc</entry>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk.config"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:cc:58:8a"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <target dev="tap7e3dfdb2-c7"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/console.log" append="off"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:50:43 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:50:43 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:50:43 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:50:43 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.956 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Preparing to wait for external event network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.957 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.957 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.957 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.958 2 DEBUG nova.virt.libvirt.vif [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:50:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-2138052222',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-2138052222',id=10,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-p1hbh7th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHost
MaintenanceStrategy-870116793-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:50:39Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=c6130d7f-0df2-4945-80e2-7287f84b9cdc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.958 2 DEBUG nova.network.os_vif_util [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.959 2 DEBUG nova.network.os_vif_util [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:58:8a,bridge_name='br-int',has_traffic_filtering=True,id=7e3dfdb2-c70c-4729-8016-04ff5e6672d1,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3dfdb2-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.959 2 DEBUG os_vif [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:58:8a,bridge_name='br-int',has_traffic_filtering=True,id=7e3dfdb2-c70c-4729-8016-04ff5e6672d1,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3dfdb2-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e3dfdb2-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e3dfdb2-c7, col_values=(('external_ids', {'iface-id': '7e3dfdb2-c70c-4729-8016-04ff5e6672d1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:58:8a', 'vm-uuid': 'c6130d7f-0df2-4945-80e2-7287f84b9cdc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:43 np0005485008 NetworkManager[51587]: <info>  [1760370643.9687] manager: (tap7e3dfdb2-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:43 np0005485008 nova_compute[192512]: 2025-10-13 15:50:43.978 2 INFO os_vif [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:58:8a,bridge_name='br-int',has_traffic_filtering=True,id=7e3dfdb2-c70c-4729-8016-04ff5e6672d1,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3dfdb2-c7')#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.046 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.046 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.047 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] No VIF found with MAC fa:16:3e:cc:58:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.047 2 INFO nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Using config drive#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.646 2 INFO nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Creating config drive at /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk.config#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.652 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl20lkwi8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.780 2 DEBUG oslo_concurrency.processutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl20lkwi8" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:50:44 np0005485008 kernel: tap7e3dfdb2-c7: entered promiscuous mode
Oct 13 11:50:44 np0005485008 NetworkManager[51587]: <info>  [1760370644.8367] manager: (tap7e3dfdb2-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Oct 13 11:50:44 np0005485008 ovn_controller[94758]: 2025-10-13T15:50:44Z|00111|binding|INFO|Claiming lport 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 for this chassis.
Oct 13 11:50:44 np0005485008 ovn_controller[94758]: 2025-10-13T15:50:44Z|00112|binding|INFO|7e3dfdb2-c70c-4729-8016-04ff5e6672d1: Claiming fa:16:3e:cc:58:8a 10.100.0.6
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.855 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:58:8a 10.100.0.6'], port_security=['fa:16:3e:cc:58:8a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6130d7f-0df2-4945-80e2-7287f84b9cdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=7e3dfdb2-c70c-4729-8016-04ff5e6672d1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.856 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 bound to our chassis#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.857 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.870 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[62bdd0d3-4e2c-4d58-b265-9a40eb89bf1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.872 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped6faec0-81 in ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.874 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped6faec0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.874 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5c44460a-d76d-4665-8059-b744ca0cdaa3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.875 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2289ddae-8809-4338-8246-2c0527617c86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 systemd-udevd[218026]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:50:44 np0005485008 systemd-machined[152551]: New machine qemu-8-instance-0000000a.
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.887 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1fc80c-77ef-4131-a7f2-e3d1fbaa5791]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 systemd[1]: Started Virtual Machine qemu-8-instance-0000000a.
Oct 13 11:50:44 np0005485008 NetworkManager[51587]: <info>  [1760370644.9018] device (tap7e3dfdb2-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:50:44 np0005485008 NetworkManager[51587]: <info>  [1760370644.9028] device (tap7e3dfdb2-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:50:44 np0005485008 ovn_controller[94758]: 2025-10-13T15:50:44Z|00113|binding|INFO|Setting lport 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 ovn-installed in OVS
Oct 13 11:50:44 np0005485008 ovn_controller[94758]: 2025-10-13T15:50:44Z|00114|binding|INFO|Setting lport 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 up in Southbound
Oct 13 11:50:44 np0005485008 nova_compute[192512]: 2025-10-13 15:50:44.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.919 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[58a7f74a-2115-43d8-9111-e083faa36f1f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.951 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[3d81c2a1-7c09-43a1-bc97-05393d85374b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 NetworkManager[51587]: <info>  [1760370644.9582] manager: (taped6faec0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.957 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e293b8-36ae-432c-b7ea-fc24d6aef6b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 systemd-udevd[218030]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.989 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[bb66979b-16f1-4661-aeb4-c17112262cb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:44 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:44.993 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[55bb1165-d3a3-47c9-85c6-71f93e7c7e6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 NetworkManager[51587]: <info>  [1760370645.0221] device (taped6faec0-80): carrier: link connected
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.026 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[9950fc6b-17ff-4882-9a62-1215216c2bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.046 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0899ed2b-6665-4d9d-b9c9-bc1097449338]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418268, 'reachable_time': 24152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 218058, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.067 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a83d20fe-fc74-4454-a53b-1b85479b3745]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:5332'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418268, 'tstamp': 418268}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 218059, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.087 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec8a2dc-8388-4b6f-add0-63d44e661e8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418268, 'reachable_time': 24152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 218060, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.118 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[290c3eb2-3950-41e9-a013-9910b62a52d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.125 2 DEBUG nova.compute.manager [req-c1cd1c4c-419d-4b7f-8395-99a85c5e2a08 req-6c898170-3648-48bc-a58e-cd750b8b2295 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received event network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.126 2 DEBUG oslo_concurrency.lockutils [req-c1cd1c4c-419d-4b7f-8395-99a85c5e2a08 req-6c898170-3648-48bc-a58e-cd750b8b2295 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.126 2 DEBUG oslo_concurrency.lockutils [req-c1cd1c4c-419d-4b7f-8395-99a85c5e2a08 req-6c898170-3648-48bc-a58e-cd750b8b2295 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.126 2 DEBUG oslo_concurrency.lockutils [req-c1cd1c4c-419d-4b7f-8395-99a85c5e2a08 req-6c898170-3648-48bc-a58e-cd750b8b2295 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.127 2 DEBUG nova.compute.manager [req-c1cd1c4c-419d-4b7f-8395-99a85c5e2a08 req-6c898170-3648-48bc-a58e-cd750b8b2295 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Processing event network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.181 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[beba5ec3-ac89-4d25-af1d-2c6f9299335d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.184 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.185 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.185 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped6faec0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:45 np0005485008 NetworkManager[51587]: <info>  [1760370645.1876] manager: (taped6faec0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 13 11:50:45 np0005485008 kernel: taped6faec0-80: entered promiscuous mode
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.192 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped6faec0-80, col_values=(('external_ids', {'iface-id': '6f3de1b2-b216-4652-b39f-5d38c68f9bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:45 np0005485008 ovn_controller[94758]: 2025-10-13T15:50:45Z|00115|binding|INFO|Releasing lport 6f3de1b2-b216-4652-b39f-5d38c68f9bbc from this chassis (sb_readonly=0)
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.197 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.200 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[50e84f0c-05e1-4b38-ab77-0b577e8cff89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.202 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.pid.haproxy
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:45 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:50:45.206 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'env', 'PROCESS_TAG=haproxy-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 11:50:45 np0005485008 podman[218099]: 2025-10-13 15:50:45.597048064 +0000 UTC m=+0.061074543 container create d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 13 11:50:45 np0005485008 systemd[1]: Started libpod-conmon-d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e.scope.
Oct 13 11:50:45 np0005485008 podman[218099]: 2025-10-13 15:50:45.557729599 +0000 UTC m=+0.021756098 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:50:45 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:50:45 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/661ab61128b92ed2eb7f9bcb1bf817d52c31e4341e8fae4f6698cbd9bbc4f86f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.697 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370645.6973336, c6130d7f-0df2-4945-80e2-7287f84b9cdc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.698 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] VM Started (Lifecycle Event)#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.700 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 11:50:45 np0005485008 podman[218099]: 2025-10-13 15:50:45.70476244 +0000 UTC m=+0.168788939 container init d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.704 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.708 2 INFO nova.virt.libvirt.driver [-] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Instance spawned successfully.#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.708 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 11:50:45 np0005485008 podman[218099]: 2025-10-13 15:50:45.712121249 +0000 UTC m=+0.176147728 container start d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.724 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.731 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:50:45 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [NOTICE]   (218118) : New worker (218120) forked
Oct 13 11:50:45 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [NOTICE]   (218118) : Loading success.
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.736 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.737 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.737 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.738 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.738 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.739 2 DEBUG nova.virt.libvirt.driver [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.760 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.760 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370645.7001839, c6130d7f-0df2-4945-80e2-7287f84b9cdc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.760 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] VM Paused (Lifecycle Event)#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.783 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.786 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370645.7033849, c6130d7f-0df2-4945-80e2-7287f84b9cdc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.787 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.791 2 INFO nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Took 6.05 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.792 2 DEBUG nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.807 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.810 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.826 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.853 2 INFO nova.compute.manager [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Took 6.62 seconds to build instance.#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.855 2 DEBUG nova.network.neutron [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Updated VIF entry in instance network info cache for port 7e3dfdb2-c70c-4729-8016-04ff5e6672d1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.855 2 DEBUG nova.network.neutron [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Updating instance_info_cache with network_info: [{"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.877 2 DEBUG oslo_concurrency.lockutils [req-ffd1037e-301a-43d2-98a7-22934c6ed181 req-4e4db562-0435-45ee-9bb8-123af15813b9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:50:45 np0005485008 nova_compute[192512]: 2025-10-13 15:50:45.877 2 DEBUG oslo_concurrency.lockutils [None req-258029cc-4675-48e9-8cb4-601a12b75f70 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:46 np0005485008 nova_compute[192512]: 2025-10-13 15:50:46.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:47 np0005485008 nova_compute[192512]: 2025-10-13 15:50:47.204 2 DEBUG nova.compute.manager [req-9b31e1dc-ae44-4372-b939-5a0937d41e26 req-609c89b9-505c-4eb6-b8dd-39542baaddd4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received event network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:50:47 np0005485008 nova_compute[192512]: 2025-10-13 15:50:47.205 2 DEBUG oslo_concurrency.lockutils [req-9b31e1dc-ae44-4372-b939-5a0937d41e26 req-609c89b9-505c-4eb6-b8dd-39542baaddd4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:50:47 np0005485008 nova_compute[192512]: 2025-10-13 15:50:47.205 2 DEBUG oslo_concurrency.lockutils [req-9b31e1dc-ae44-4372-b939-5a0937d41e26 req-609c89b9-505c-4eb6-b8dd-39542baaddd4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:50:47 np0005485008 nova_compute[192512]: 2025-10-13 15:50:47.205 2 DEBUG oslo_concurrency.lockutils [req-9b31e1dc-ae44-4372-b939-5a0937d41e26 req-609c89b9-505c-4eb6-b8dd-39542baaddd4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:50:47 np0005485008 nova_compute[192512]: 2025-10-13 15:50:47.206 2 DEBUG nova.compute.manager [req-9b31e1dc-ae44-4372-b939-5a0937d41e26 req-609c89b9-505c-4eb6-b8dd-39542baaddd4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] No waiting events found dispatching network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:50:47 np0005485008 nova_compute[192512]: 2025-10-13 15:50:47.206 2 WARNING nova.compute.manager [req-9b31e1dc-ae44-4372-b939-5a0937d41e26 req-609c89b9-505c-4eb6-b8dd-39542baaddd4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received unexpected event network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 for instance with vm_state active and task_state None.#033[00m
Oct 13 11:50:48 np0005485008 nova_compute[192512]: 2025-10-13 15:50:48.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:50:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:50:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:50:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:50:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:50:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:50:50 np0005485008 podman[218129]: 2025-10-13 15:50:50.772527012 +0000 UTC m=+0.066567580 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Oct 13 11:50:51 np0005485008 nova_compute[192512]: 2025-10-13 15:50:51.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:53 np0005485008 nova_compute[192512]: 2025-10-13 15:50:53.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:56 np0005485008 nova_compute[192512]: 2025-10-13 15:50:56.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:50:57 np0005485008 ovn_controller[94758]: 2025-10-13T15:50:57Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:58:8a 10.100.0.6
Oct 13 11:50:57 np0005485008 ovn_controller[94758]: 2025-10-13T15:50:57Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:58:8a 10.100.0.6
Oct 13 11:50:59 np0005485008 nova_compute[192512]: 2025-10-13 15:50:59.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:01 np0005485008 nova_compute[192512]: 2025-10-13 15:51:01.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:04 np0005485008 nova_compute[192512]: 2025-10-13 15:51:04.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:04 np0005485008 nova_compute[192512]: 2025-10-13 15:51:04.458 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:04 np0005485008 nova_compute[192512]: 2025-10-13 15:51:04.459 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:51:05 np0005485008 nova_compute[192512]: 2025-10-13 15:51:05.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:05 np0005485008 podman[202884]: time="2025-10-13T15:51:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:51:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:51:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20779 "" "Go-http-client/1.1"
Oct 13 11:51:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:51:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3456 "" "Go-http-client/1.1"
Oct 13 11:51:06 np0005485008 nova_compute[192512]: 2025-10-13 15:51:06.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:06 np0005485008 nova_compute[192512]: 2025-10-13 15:51:06.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:07 np0005485008 podman[218164]: 2025-10-13 15:51:07.758440526 +0000 UTC m=+0.063332628 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 11:51:07 np0005485008 podman[218166]: 2025-10-13 15:51:07.760742199 +0000 UTC m=+0.058459227 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 11:51:07 np0005485008 podman[218167]: 2025-10-13 15:51:07.774062654 +0000 UTC m=+0.065433544 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:51:07 np0005485008 podman[218165]: 2025-10-13 15:51:07.794371179 +0000 UTC m=+0.094576475 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 11:51:07 np0005485008 podman[218173]: 2025-10-13 15:51:07.804432063 +0000 UTC m=+0.095203464 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 11:51:08 np0005485008 nova_compute[192512]: 2025-10-13 15:51:08.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:08 np0005485008 nova_compute[192512]: 2025-10-13 15:51:08.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:09 np0005485008 nova_compute[192512]: 2025-10-13 15:51:09.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:09 np0005485008 nova_compute[192512]: 2025-10-13 15:51:09.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:11 np0005485008 nova_compute[192512]: 2025-10-13 15:51:11.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:12 np0005485008 nova_compute[192512]: 2025-10-13 15:51:12.844 2 DEBUG nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Creating tmpfile /var/lib/nova/instances/tmpsqs_awou to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 11:51:12 np0005485008 nova_compute[192512]: 2025-10-13 15:51:12.845 2 DEBUG nova.compute.manager [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpsqs_awou',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 11:51:13 np0005485008 nova_compute[192512]: 2025-10-13 15:51:13.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:13 np0005485008 nova_compute[192512]: 2025-10-13 15:51:13.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:51:13 np0005485008 nova_compute[192512]: 2025-10-13 15:51:13.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:51:13 np0005485008 nova_compute[192512]: 2025-10-13 15:51:13.663 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:51:13 np0005485008 nova_compute[192512]: 2025-10-13 15:51:13.664 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:51:13 np0005485008 nova_compute[192512]: 2025-10-13 15:51:13.664 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:51:13 np0005485008 nova_compute[192512]: 2025-10-13 15:51:13.665 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c6130d7f-0df2-4945-80e2-7287f84b9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:51:14 np0005485008 nova_compute[192512]: 2025-10-13 15:51:14.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:14 np0005485008 nova_compute[192512]: 2025-10-13 15:51:14.078 2 DEBUG nova.compute.manager [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpsqs_awou',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='5ce49159-edcc-4a32-b0b0-879ff6d6eb57',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 11:51:14 np0005485008 nova_compute[192512]: 2025-10-13 15:51:14.113 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-5ce49159-edcc-4a32-b0b0-879ff6d6eb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:51:14 np0005485008 nova_compute[192512]: 2025-10-13 15:51:14.113 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-5ce49159-edcc-4a32-b0b0-879ff6d6eb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:51:14 np0005485008 nova_compute[192512]: 2025-10-13 15:51:14.114 2 DEBUG nova.network.neutron [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.059 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Updating instance_info_cache with network_info: [{"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.086 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-c6130d7f-0df2-4945-80e2-7287f84b9cdc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.087 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.087 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.087 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.108 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.108 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.109 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.109 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.174 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:15 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:15Z|00116|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.254 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.257 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.333 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.508 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.509 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5697MB free_disk=73.44089126586914GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.509 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.509 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.570 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration for instance 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.607 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Updating resource usage from migration 2ef47667-74b7-4ef7-9153-b772cc6ec9e5#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.607 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Starting to track incoming migration 2ef47667-74b7-4ef7-9153-b772cc6ec9e5 with flavor ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.651 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance c6130d7f-0df2-4945-80e2-7287f84b9cdc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.675 2 WARNING nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.675 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.675 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.738 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.755 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.781 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.781 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.835 2 DEBUG nova.network.neutron [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Updating instance_info_cache with network_info: [{"id": "29174123-7ab9-4010-a3de-662a5a9ac63c", "address": "fa:16:3e:6f:46:27", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29174123-7a", "ovs_interfaceid": "29174123-7ab9-4010-a3de-662a5a9ac63c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.853 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-5ce49159-edcc-4a32-b0b0-879ff6d6eb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.855 2 DEBUG nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpsqs_awou',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='5ce49159-edcc-4a32-b0b0-879ff6d6eb57',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.856 2 DEBUG nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Creating instance directory: /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.856 2 DEBUG nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Creating disk.info with the contents: {'/var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk': 'qcow2', '/var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.856 2 DEBUG nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.857 2 DEBUG nova.objects.instance [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.900 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.967 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.968 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.968 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:15 np0005485008 nova_compute[192512]: 2025-10-13 15:51:15.979 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.039 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.040 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.079 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.080 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.081 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.143 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.144 2 DEBUG nova.virt.disk.api [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.145 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.208 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.209 2 DEBUG nova.virt.disk.api [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.210 2 DEBUG nova.objects.instance [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.246 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.274 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk.config 485376" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.277 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk.config to /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.277 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk.config /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.710 2 DEBUG oslo_concurrency.processutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57/disk.config /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.711 2 DEBUG nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.712 2 DEBUG nova.virt.libvirt.vif [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-598859922',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-598859922',id=9,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:50:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-y3kj2kcd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:50:32Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=5ce49159-edcc-4a32-b0b0-879ff6d6eb57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29174123-7ab9-4010-a3de-662a5a9ac63c", "address": "fa:16:3e:6f:46:27", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap29174123-7a", "ovs_interfaceid": "29174123-7ab9-4010-a3de-662a5a9ac63c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.713 2 DEBUG nova.network.os_vif_util [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "29174123-7ab9-4010-a3de-662a5a9ac63c", "address": "fa:16:3e:6f:46:27", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap29174123-7a", "ovs_interfaceid": "29174123-7ab9-4010-a3de-662a5a9ac63c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.714 2 DEBUG nova.network.os_vif_util [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:46:27,bridge_name='br-int',has_traffic_filtering=True,id=29174123-7ab9-4010-a3de-662a5a9ac63c,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29174123-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.714 2 DEBUG os_vif [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:46:27,bridge_name='br-int',has_traffic_filtering=True,id=29174123-7ab9-4010-a3de-662a5a9ac63c,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29174123-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29174123-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.720 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29174123-7a, col_values=(('external_ids', {'iface-id': '29174123-7ab9-4010-a3de-662a5a9ac63c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:46:27', 'vm-uuid': '5ce49159-edcc-4a32-b0b0-879ff6d6eb57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:16 np0005485008 NetworkManager[51587]: <info>  [1760370676.7232] manager: (tap29174123-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.731 2 INFO os_vif [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:46:27,bridge_name='br-int',has_traffic_filtering=True,id=29174123-7ab9-4010-a3de-662a5a9ac63c,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29174123-7a')#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.732 2 DEBUG nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 11:51:16 np0005485008 nova_compute[192512]: 2025-10-13 15:51:16.732 2 DEBUG nova.compute.manager [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpsqs_awou',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='5ce49159-edcc-4a32-b0b0-879ff6d6eb57',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 11:51:17 np0005485008 nova_compute[192512]: 2025-10-13 15:51:17.778 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:51:18 np0005485008 nova_compute[192512]: 2025-10-13 15:51:18.538 2 DEBUG nova.network.neutron [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Port 29174123-7ab9-4010-a3de-662a5a9ac63c updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 11:51:18 np0005485008 nova_compute[192512]: 2025-10-13 15:51:18.540 2 DEBUG nova.compute.manager [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpsqs_awou',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='5ce49159-edcc-4a32-b0b0-879ff6d6eb57',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 11:51:18 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 11:51:18 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 11:51:18 np0005485008 kernel: tap29174123-7a: entered promiscuous mode
Oct 13 11:51:18 np0005485008 NetworkManager[51587]: <info>  [1760370678.8283] manager: (tap29174123-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct 13 11:51:18 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:18Z|00117|binding|INFO|Claiming lport 29174123-7ab9-4010-a3de-662a5a9ac63c for this additional chassis.
Oct 13 11:51:18 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:18Z|00118|binding|INFO|29174123-7ab9-4010-a3de-662a5a9ac63c: Claiming fa:16:3e:6f:46:27 10.100.0.12
Oct 13 11:51:18 np0005485008 nova_compute[192512]: 2025-10-13 15:51:18.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:18 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:18Z|00119|binding|INFO|Setting lport 29174123-7ab9-4010-a3de-662a5a9ac63c ovn-installed in OVS
Oct 13 11:51:18 np0005485008 nova_compute[192512]: 2025-10-13 15:51:18.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:18 np0005485008 nova_compute[192512]: 2025-10-13 15:51:18.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:18 np0005485008 systemd-udevd[218328]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:51:18 np0005485008 systemd-machined[152551]: New machine qemu-9-instance-00000009.
Oct 13 11:51:18 np0005485008 NetworkManager[51587]: <info>  [1760370678.8775] device (tap29174123-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:51:18 np0005485008 NetworkManager[51587]: <info>  [1760370678.8782] device (tap29174123-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:51:18 np0005485008 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Oct 13 11:51:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:51:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:51:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:51:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:51:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:51:19 np0005485008 nova_compute[192512]: 2025-10-13 15:51:19.711 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370679.7105587, 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:51:19 np0005485008 nova_compute[192512]: 2025-10-13 15:51:19.711 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] VM Started (Lifecycle Event)#033[00m
Oct 13 11:51:19 np0005485008 nova_compute[192512]: 2025-10-13 15:51:19.752 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:51:20 np0005485008 nova_compute[192512]: 2025-10-13 15:51:20.387 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370680.3874404, 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:51:20 np0005485008 nova_compute[192512]: 2025-10-13 15:51:20.388 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:51:20 np0005485008 nova_compute[192512]: 2025-10-13 15:51:20.419 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:51:20 np0005485008 nova_compute[192512]: 2025-10-13 15:51:20.424 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:51:20 np0005485008 nova_compute[192512]: 2025-10-13 15:51:20.471 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 11:51:21 np0005485008 nova_compute[192512]: 2025-10-13 15:51:21.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:21Z|00120|binding|INFO|Claiming lport 29174123-7ab9-4010-a3de-662a5a9ac63c for this chassis.
Oct 13 11:51:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:21Z|00121|binding|INFO|29174123-7ab9-4010-a3de-662a5a9ac63c: Claiming fa:16:3e:6f:46:27 10.100.0.12
Oct 13 11:51:21 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:21Z|00122|binding|INFO|Setting lport 29174123-7ab9-4010-a3de-662a5a9ac63c up in Southbound
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.565 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:46:27 10.100.0.12'], port_security=['fa:16:3e:6f:46:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5ce49159-edcc-4a32-b0b0-879ff6d6eb57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '11', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=29174123-7ab9-4010-a3de-662a5a9ac63c) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.568 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 29174123-7ab9-4010-a3de-662a5a9ac63c in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 bound to our chassis#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.570 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.588 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[41247c4c-50db-4bb3-99b4-33977c84f198]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.615 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[7c81cd51-e5df-4aba-8e3b-e6f9df8064bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.623 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[e5643f8c-5178-498e-ac0d-42d197e4f700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.654 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[02dfc1e7-e3f1-4d2a-921e-fa05e2ecae13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.677 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[292d4865-5a25-4449-88eb-05daa90dc8bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418268, 'reachable_time': 24152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 218365, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.699 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[99b5a834-e38c-41f6-8a80-b5a8cce1476d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418281, 'tstamp': 418281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 218367, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418284, 'tstamp': 418284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 218367, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.701 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:21 np0005485008 nova_compute[192512]: 2025-10-13 15:51:21.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:21 np0005485008 nova_compute[192512]: 2025-10-13 15:51:21.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.704 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped6faec0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.705 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.705 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped6faec0-80, col_values=(('external_ids', {'iface-id': '6f3de1b2-b216-4652-b39f-5d38c68f9bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:21.706 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:51:21 np0005485008 nova_compute[192512]: 2025-10-13 15:51:21.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:21 np0005485008 nova_compute[192512]: 2025-10-13 15:51:21.763 2 INFO nova.compute.manager [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Post operation of migration started#033[00m
Oct 13 11:51:21 np0005485008 podman[218366]: 2025-10-13 15:51:21.772573279 +0000 UTC m=+0.068473880 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct 13 11:51:22 np0005485008 nova_compute[192512]: 2025-10-13 15:51:22.729 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-5ce49159-edcc-4a32-b0b0-879ff6d6eb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:51:22 np0005485008 nova_compute[192512]: 2025-10-13 15:51:22.729 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-5ce49159-edcc-4a32-b0b0-879ff6d6eb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:51:22 np0005485008 nova_compute[192512]: 2025-10-13 15:51:22.729 2 DEBUG nova.network.neutron [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:51:24 np0005485008 nova_compute[192512]: 2025-10-13 15:51:24.740 2 DEBUG nova.network.neutron [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Updating instance_info_cache with network_info: [{"id": "29174123-7ab9-4010-a3de-662a5a9ac63c", "address": "fa:16:3e:6f:46:27", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29174123-7a", "ovs_interfaceid": "29174123-7ab9-4010-a3de-662a5a9ac63c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:51:24 np0005485008 nova_compute[192512]: 2025-10-13 15:51:24.796 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-5ce49159-edcc-4a32-b0b0-879ff6d6eb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:51:24 np0005485008 nova_compute[192512]: 2025-10-13 15:51:24.814 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:24 np0005485008 nova_compute[192512]: 2025-10-13 15:51:24.815 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:24 np0005485008 nova_compute[192512]: 2025-10-13 15:51:24.815 2 DEBUG oslo_concurrency.lockutils [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:24 np0005485008 nova_compute[192512]: 2025-10-13 15:51:24.820 2 INFO nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 11:51:24 np0005485008 virtqemud[192082]: Domain id=9 name='instance-00000009' uuid=5ce49159-edcc-4a32-b0b0-879ff6d6eb57 is tainted: custom-monitor
Oct 13 11:51:25 np0005485008 nova_compute[192512]: 2025-10-13 15:51:25.827 2 INFO nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 11:51:26 np0005485008 nova_compute[192512]: 2025-10-13 15:51:26.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:26 np0005485008 nova_compute[192512]: 2025-10-13 15:51:26.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:26 np0005485008 nova_compute[192512]: 2025-10-13 15:51:26.836 2 INFO nova.virt.libvirt.driver [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 11:51:26 np0005485008 nova_compute[192512]: 2025-10-13 15:51:26.843 2 DEBUG nova.compute.manager [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:51:26 np0005485008 nova_compute[192512]: 2025-10-13 15:51:26.872 2 DEBUG nova.objects.instance [None req-374b7bf8-fd06-401f-a736-a6e5c7fc34e0 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.939 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.940 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.941 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.941 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.942 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.944 2 INFO nova.compute.manager [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Terminating instance#033[00m
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.945 2 DEBUG nova.compute.manager [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:51:31 np0005485008 kernel: tap7e3dfdb2-c7 (unregistering): left promiscuous mode
Oct 13 11:51:31 np0005485008 NetworkManager[51587]: <info>  [1760370691.9764] device (tap7e3dfdb2-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:51:31 np0005485008 nova_compute[192512]: 2025-10-13 15:51:31.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:31 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:31Z|00123|binding|INFO|Releasing lport 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 from this chassis (sb_readonly=0)
Oct 13 11:51:31 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:31Z|00124|binding|INFO|Setting lport 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 down in Southbound
Oct 13 11:51:31 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:31Z|00125|binding|INFO|Removing iface tap7e3dfdb2-c7 ovn-installed in OVS
Oct 13 11:51:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:31.995 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:58:8a 10.100.0.6'], port_security=['fa:16:3e:cc:58:8a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c6130d7f-0df2-4945-80e2-7287f84b9cdc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=7e3dfdb2-c70c-4729-8016-04ff5e6672d1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:51:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:31.997 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 7e3dfdb2-c70c-4729-8016-04ff5e6672d1 in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 unbound from our chassis#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:31.999 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.018 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2267baff-4c5d-43d1-97e6-f1b863f8a12d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.055 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[cd74d4f6-a659-48cb-9b51-be906cacae73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.060 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[7bebb1bc-f5c3-4760-8074-7f1c3ab68685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:32 np0005485008 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 13 11:51:32 np0005485008 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d0000000a.scope: Consumed 13.510s CPU time.
Oct 13 11:51:32 np0005485008 systemd-machined[152551]: Machine qemu-8-instance-0000000a terminated.
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.092 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c8e91c-48bf-4aaa-815c-29acef8d5be5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.111 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[69c4d26e-515e-4bdf-bcbd-1abc125de680]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418268, 'reachable_time': 24152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 218400, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.129 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee3f53d-9106-42b6-bbfe-e8000aff821e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418281, 'tstamp': 418281}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 218401, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418284, 'tstamp': 418284}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 218401, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.131 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.141 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped6faec0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.142 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.142 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped6faec0-80, col_values=(('external_ids', {'iface-id': '6f3de1b2-b216-4652-b39f-5d38c68f9bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.142 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.217 2 INFO nova.virt.libvirt.driver [-] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Instance destroyed successfully.#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.217 2 DEBUG nova.objects.instance [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'resources' on Instance uuid c6130d7f-0df2-4945-80e2-7287f84b9cdc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.228 2 DEBUG nova.virt.libvirt.vif [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:50:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-2138052222',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-2138052222',id=10,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:50:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-p1hbh7th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image
_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:50:45Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=c6130d7f-0df2-4945-80e2-7287f84b9cdc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.228 2 DEBUG nova.network.os_vif_util [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "address": "fa:16:3e:cc:58:8a", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3dfdb2-c7", "ovs_interfaceid": "7e3dfdb2-c70c-4729-8016-04ff5e6672d1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.229 2 DEBUG nova.network.os_vif_util [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cc:58:8a,bridge_name='br-int',has_traffic_filtering=True,id=7e3dfdb2-c70c-4729-8016-04ff5e6672d1,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3dfdb2-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.229 2 DEBUG os_vif [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:58:8a,bridge_name='br-int',has_traffic_filtering=True,id=7e3dfdb2-c70c-4729-8016-04ff5e6672d1,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3dfdb2-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e3dfdb2-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.242 2 INFO os_vif [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cc:58:8a,bridge_name='br-int',has_traffic_filtering=True,id=7e3dfdb2-c70c-4729-8016-04ff5e6672d1,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3dfdb2-c7')#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.243 2 INFO nova.virt.libvirt.driver [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Deleting instance files /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc_del#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.243 2 INFO nova.virt.libvirt.driver [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Deletion of /var/lib/nova/instances/c6130d7f-0df2-4945-80e2-7287f84b9cdc_del complete#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.288 2 INFO nova.compute.manager [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Took 0.34 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.289 2 DEBUG oslo.service.loopingcall [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.289 2 DEBUG nova.compute.manager [-] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.289 2 DEBUG nova.network.neutron [-] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.725 2 DEBUG nova.compute.manager [req-5c3a3ad3-dbac-48ac-aab8-16ee26b61268 req-0dfdba4c-86f4-43cd-83b5-c0a32e11aa81 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received event network-vif-unplugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.726 2 DEBUG oslo_concurrency.lockutils [req-5c3a3ad3-dbac-48ac-aab8-16ee26b61268 req-0dfdba4c-86f4-43cd-83b5-c0a32e11aa81 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.726 2 DEBUG oslo_concurrency.lockutils [req-5c3a3ad3-dbac-48ac-aab8-16ee26b61268 req-0dfdba4c-86f4-43cd-83b5-c0a32e11aa81 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.727 2 DEBUG oslo_concurrency.lockutils [req-5c3a3ad3-dbac-48ac-aab8-16ee26b61268 req-0dfdba4c-86f4-43cd-83b5-c0a32e11aa81 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.727 2 DEBUG nova.compute.manager [req-5c3a3ad3-dbac-48ac-aab8-16ee26b61268 req-0dfdba4c-86f4-43cd-83b5-c0a32e11aa81 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] No waiting events found dispatching network-vif-unplugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.727 2 DEBUG nova.compute.manager [req-5c3a3ad3-dbac-48ac-aab8-16ee26b61268 req-0dfdba4c-86f4-43cd-83b5-c0a32e11aa81 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received event network-vif-unplugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.838 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:51:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:32.839 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.880 2 DEBUG nova.network.neutron [-] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.896 2 INFO nova.compute.manager [-] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Took 0.61 seconds to deallocate network for instance.#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.948 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.949 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:32 np0005485008 nova_compute[192512]: 2025-10-13 15:51:32.963 2 DEBUG nova.compute.manager [req-9dabfce1-d5e1-47a8-8ef5-4abca2a81f40 req-e4cd2691-8843-440f-be8c-5b2909a6b622 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received event network-vif-deleted-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.020 2 DEBUG nova.compute.provider_tree [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.035 2 DEBUG nova.scheduler.client.report [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.058 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.088 2 INFO nova.scheduler.client.report [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Deleted allocations for instance c6130d7f-0df2-4945-80e2-7287f84b9cdc#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.156 2 DEBUG oslo_concurrency.lockutils [None req-893adf52-3c63-4fb2-8f7b-8bd0dfba286d c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:33.954 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:33.955 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:33.956 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.991 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.992 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.992 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.993 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.993 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.995 2 INFO nova.compute.manager [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Terminating instance#033[00m
Oct 13 11:51:33 np0005485008 nova_compute[192512]: 2025-10-13 15:51:33.997 2 DEBUG nova.compute.manager [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:51:34 np0005485008 kernel: tap29174123-7a (unregistering): left promiscuous mode
Oct 13 11:51:34 np0005485008 NetworkManager[51587]: <info>  [1760370694.0300] device (tap29174123-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:51:34 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:34Z|00126|binding|INFO|Releasing lport 29174123-7ab9-4010-a3de-662a5a9ac63c from this chassis (sb_readonly=0)
Oct 13 11:51:34 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:34Z|00127|binding|INFO|Setting lport 29174123-7ab9-4010-a3de-662a5a9ac63c down in Southbound
Oct 13 11:51:34 np0005485008 ovn_controller[94758]: 2025-10-13T15:51:34Z|00128|binding|INFO|Removing iface tap29174123-7a ovn-installed in OVS
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.091 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:46:27 10.100.0.12'], port_security=['fa:16:3e:6f:46:27 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5ce49159-edcc-4a32-b0b0-879ff6d6eb57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '13', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=29174123-7ab9-4010-a3de-662a5a9ac63c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.092 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 29174123-7ab9-4010-a3de-662a5a9ac63c in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 unbound from our chassis#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.093 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.094 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4d4e45-50c2-421a-a5c1-de5a7006ac08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.095 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 namespace which is not needed anymore#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:34 np0005485008 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct 13 11:51:34 np0005485008 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 1.809s CPU time.
Oct 13 11:51:34 np0005485008 systemd-machined[152551]: Machine qemu-9-instance-00000009 terminated.
Oct 13 11:51:34 np0005485008 NetworkManager[51587]: <info>  [1760370694.2172] manager: (tap29174123-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 13 11:51:34 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [NOTICE]   (218118) : haproxy version is 2.8.14-c23fe91
Oct 13 11:51:34 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [NOTICE]   (218118) : path to executable is /usr/sbin/haproxy
Oct 13 11:51:34 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [WARNING]  (218118) : Exiting Master process...
Oct 13 11:51:34 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [WARNING]  (218118) : Exiting Master process...
Oct 13 11:51:34 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [ALERT]    (218118) : Current worker (218120) exited with code 143 (Terminated)
Oct 13 11:51:34 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218114]: [WARNING]  (218118) : All workers exited. Exiting... (0)
Oct 13 11:51:34 np0005485008 systemd[1]: libpod-d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e.scope: Deactivated successfully.
Oct 13 11:51:34 np0005485008 podman[218442]: 2025-10-13 15:51:34.254993768 +0000 UTC m=+0.058624721 container died d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.262 2 INFO nova.virt.libvirt.driver [-] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Instance destroyed successfully.#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.263 2 DEBUG nova.objects.instance [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'resources' on Instance uuid 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.276 2 DEBUG nova.virt.libvirt.vif [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T15:50:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-598859922',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-598859922',id=9,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:50:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-y3kj2kcd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:51:26Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=5ce49159-edcc-4a32-b0b0-879ff6d6eb57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29174123-7ab9-4010-a3de-662a5a9ac63c", "address": "fa:16:3e:6f:46:27", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29174123-7a", "ovs_interfaceid": "29174123-7ab9-4010-a3de-662a5a9ac63c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.276 2 DEBUG nova.network.os_vif_util [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "29174123-7ab9-4010-a3de-662a5a9ac63c", "address": "fa:16:3e:6f:46:27", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29174123-7a", "ovs_interfaceid": "29174123-7ab9-4010-a3de-662a5a9ac63c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.277 2 DEBUG nova.network.os_vif_util [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:46:27,bridge_name='br-int',has_traffic_filtering=True,id=29174123-7ab9-4010-a3de-662a5a9ac63c,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29174123-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.277 2 DEBUG os_vif [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:46:27,bridge_name='br-int',has_traffic_filtering=True,id=29174123-7ab9-4010-a3de-662a5a9ac63c,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29174123-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29174123-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.287 2 INFO os_vif [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:46:27,bridge_name='br-int',has_traffic_filtering=True,id=29174123-7ab9-4010-a3de-662a5a9ac63c,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29174123-7a')#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.288 2 INFO nova.virt.libvirt.driver [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Deleting instance files /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57_del#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.288 2 INFO nova.virt.libvirt.driver [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Deletion of /var/lib/nova/instances/5ce49159-edcc-4a32-b0b0-879ff6d6eb57_del complete#033[00m
Oct 13 11:51:34 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e-userdata-shm.mount: Deactivated successfully.
Oct 13 11:51:34 np0005485008 systemd[1]: var-lib-containers-storage-overlay-661ab61128b92ed2eb7f9bcb1bf817d52c31e4341e8fae4f6698cbd9bbc4f86f-merged.mount: Deactivated successfully.
Oct 13 11:51:34 np0005485008 podman[218442]: 2025-10-13 15:51:34.312857835 +0000 UTC m=+0.116488788 container cleanup d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:51:34 np0005485008 systemd[1]: libpod-conmon-d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e.scope: Deactivated successfully.
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.334 2 INFO nova.compute.manager [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Took 0.34 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.335 2 DEBUG oslo.service.loopingcall [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.336 2 DEBUG nova.compute.manager [-] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.336 2 DEBUG nova.network.neutron [-] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:51:34 np0005485008 podman[218490]: 2025-10-13 15:51:34.391746889 +0000 UTC m=+0.054621316 container remove d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.397 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[27700220-0224-44cc-8b8e-8d7f5fee30ac]: (4, ('Mon Oct 13 03:51:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 (d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e)\nd8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e\nMon Oct 13 03:51:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 (d8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e)\nd8888cd9a6391d1f1b2fe2471d560df95f2193ee5bf584f5c1d6c5a44352460e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.399 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[91973be1-1548-4591-978f-994791ad4a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.400 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:34 np0005485008 kernel: taped6faec0-80: left promiscuous mode
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.418 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e15da84a-9fa7-4916-b4cd-64cb76e10338]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.445 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d31577c2-9a92-44eb-b182-d89d85906667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.446 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6f6dc2-ae46-4ab0-aba6-5e64375571a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.462 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[04c844bd-9534-4788-a385-fe5c12fcced7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418261, 'reachable_time': 18780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 218505, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 systemd[1]: run-netns-ovnmeta\x2ded6faec0\x2d8362\x2d4a4b\x2da5bb\x2dd4f7a3a596b6.mount: Deactivated successfully.
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.467 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 11:51:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:34.469 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[88158591-f176-4c61-af28-21ff145851a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.815 2 DEBUG nova.compute.manager [req-f7b044fd-f3ea-4f20-a4db-8fc4306033c3 req-64ce36b2-f636-4c98-a309-3735eaa1a5c1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received event network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.816 2 DEBUG oslo_concurrency.lockutils [req-f7b044fd-f3ea-4f20-a4db-8fc4306033c3 req-64ce36b2-f636-4c98-a309-3735eaa1a5c1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.816 2 DEBUG oslo_concurrency.lockutils [req-f7b044fd-f3ea-4f20-a4db-8fc4306033c3 req-64ce36b2-f636-4c98-a309-3735eaa1a5c1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.816 2 DEBUG oslo_concurrency.lockutils [req-f7b044fd-f3ea-4f20-a4db-8fc4306033c3 req-64ce36b2-f636-4c98-a309-3735eaa1a5c1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "c6130d7f-0df2-4945-80e2-7287f84b9cdc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.817 2 DEBUG nova.compute.manager [req-f7b044fd-f3ea-4f20-a4db-8fc4306033c3 req-64ce36b2-f636-4c98-a309-3735eaa1a5c1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] No waiting events found dispatching network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.817 2 WARNING nova.compute.manager [req-f7b044fd-f3ea-4f20-a4db-8fc4306033c3 req-64ce36b2-f636-4c98-a309-3735eaa1a5c1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Received unexpected event network-vif-plugged-7e3dfdb2-c70c-4729-8016-04ff5e6672d1 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.960 2 DEBUG nova.network.neutron [-] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:51:34 np0005485008 nova_compute[192512]: 2025-10-13 15:51:34.993 2 INFO nova.compute.manager [-] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Took 0.66 seconds to deallocate network for instance.#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.051 2 DEBUG nova.compute.manager [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Received event network-vif-unplugged-29174123-7ab9-4010-a3de-662a5a9ac63c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.052 2 DEBUG oslo_concurrency.lockutils [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.052 2 DEBUG oslo_concurrency.lockutils [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.052 2 DEBUG oslo_concurrency.lockutils [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.053 2 DEBUG nova.compute.manager [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] No waiting events found dispatching network-vif-unplugged-29174123-7ab9-4010-a3de-662a5a9ac63c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.053 2 DEBUG nova.compute.manager [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Received event network-vif-unplugged-29174123-7ab9-4010-a3de-662a5a9ac63c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.053 2 DEBUG nova.compute.manager [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Received event network-vif-plugged-29174123-7ab9-4010-a3de-662a5a9ac63c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.053 2 DEBUG oslo_concurrency.lockutils [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.054 2 DEBUG oslo_concurrency.lockutils [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.054 2 DEBUG oslo_concurrency.lockutils [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.054 2 DEBUG nova.compute.manager [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] No waiting events found dispatching network-vif-plugged-29174123-7ab9-4010-a3de-662a5a9ac63c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.054 2 WARNING nova.compute.manager [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Received unexpected event network-vif-plugged-29174123-7ab9-4010-a3de-662a5a9ac63c for instance with vm_state active and task_state deleting.#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.055 2 DEBUG nova.compute.manager [req-0796aaa4-df33-47ca-a214-5ec8feb42f11 req-3cf77e8b-9d20-43e6-a9ae-d820b31442fc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Received event network-vif-deleted-29174123-7ab9-4010-a3de-662a5a9ac63c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.057 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.057 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.063 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.091 2 INFO nova.scheduler.client.report [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Deleted allocations for instance 5ce49159-edcc-4a32-b0b0-879ff6d6eb57#033[00m
Oct 13 11:51:35 np0005485008 nova_compute[192512]: 2025-10-13 15:51:35.148 2 DEBUG oslo_concurrency.lockutils [None req-45ced6ad-f8de-4c64-b7f0-dc26a962b4e2 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "5ce49159-edcc-4a32-b0b0-879ff6d6eb57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:51:35 np0005485008 podman[202884]: time="2025-10-13T15:51:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:51:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:51:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:51:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:51:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Oct 13 11:51:36 np0005485008 nova_compute[192512]: 2025-10-13 15:51:36.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:38 np0005485008 podman[218506]: 2025-10-13 15:51:38.772889994 +0000 UTC m=+0.068160889 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd)
Oct 13 11:51:38 np0005485008 podman[218509]: 2025-10-13 15:51:38.772919014 +0000 UTC m=+0.057527637 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:51:38 np0005485008 podman[218508]: 2025-10-13 15:51:38.788366677 +0000 UTC m=+0.072478434 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:51:38 np0005485008 podman[218515]: 2025-10-13 15:51:38.811198599 +0000 UTC m=+0.089913267 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:51:38 np0005485008 podman[218507]: 2025-10-13 15:51:38.823424762 +0000 UTC m=+0.110734489 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid)
Oct 13 11:51:39 np0005485008 nova_compute[192512]: 2025-10-13 15:51:39.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:41 np0005485008 nova_compute[192512]: 2025-10-13 15:51:41.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:51:41.841 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:51:44 np0005485008 nova_compute[192512]: 2025-10-13 15:51:44.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:46 np0005485008 nova_compute[192512]: 2025-10-13 15:51:46.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:47 np0005485008 nova_compute[192512]: 2025-10-13 15:51:47.215 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370692.214534, c6130d7f-0df2-4945-80e2-7287f84b9cdc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:51:47 np0005485008 nova_compute[192512]: 2025-10-13 15:51:47.216 2 INFO nova.compute.manager [-] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:51:47 np0005485008 nova_compute[192512]: 2025-10-13 15:51:47.235 2 DEBUG nova.compute.manager [None req-f7bf3639-240c-4518-893f-df5803f61149 - - - - - -] [instance: c6130d7f-0df2-4945-80e2-7287f84b9cdc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:51:49 np0005485008 nova_compute[192512]: 2025-10-13 15:51:49.262 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370694.261104, 5ce49159-edcc-4a32-b0b0-879ff6d6eb57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:51:49 np0005485008 nova_compute[192512]: 2025-10-13 15:51:49.263 2 INFO nova.compute.manager [-] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:51:49 np0005485008 nova_compute[192512]: 2025-10-13 15:51:49.284 2 DEBUG nova.compute.manager [None req-98a1a047-8153-4555-96e9-738821e6467d - - - - - -] [instance: 5ce49159-edcc-4a32-b0b0-879ff6d6eb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:51:49 np0005485008 nova_compute[192512]: 2025-10-13 15:51:49.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:51:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:51:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:51:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:51:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:51:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:51:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:51:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:51:51 np0005485008 nova_compute[192512]: 2025-10-13 15:51:51.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:52 np0005485008 podman[218611]: 2025-10-13 15:51:52.76000477 +0000 UTC m=+0.067340235 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64)
Oct 13 11:51:54 np0005485008 nova_compute[192512]: 2025-10-13 15:51:54.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:56 np0005485008 nova_compute[192512]: 2025-10-13 15:51:56.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:51:59 np0005485008 nova_compute[192512]: 2025-10-13 15:51:59.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:01 np0005485008 nova_compute[192512]: 2025-10-13 15:52:01.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:04 np0005485008 nova_compute[192512]: 2025-10-13 15:52:04.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:04 np0005485008 nova_compute[192512]: 2025-10-13 15:52:04.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:04 np0005485008 nova_compute[192512]: 2025-10-13 15:52:04.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:52:05 np0005485008 podman[202884]: time="2025-10-13T15:52:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:52:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:52:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:52:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:52:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2993 "" "Go-http-client/1.1"
Oct 13 11:52:06 np0005485008 nova_compute[192512]: 2025-10-13 15:52:06.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:06 np0005485008 nova_compute[192512]: 2025-10-13 15:52:06.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:07 np0005485008 nova_compute[192512]: 2025-10-13 15:52:07.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.865 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.866 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.890 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.982 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.983 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.989 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:52:08 np0005485008 nova_compute[192512]: 2025-10-13 15:52:08.989 2 INFO nova.compute.claims [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.083 2 DEBUG nova.compute.provider_tree [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.099 2 DEBUG nova.scheduler.client.report [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.118 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.119 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.167 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.168 2 DEBUG nova.network.neutron [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.185 2 INFO nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.207 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.297 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.299 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.300 2 INFO nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Creating image(s)#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.301 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "/var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.301 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "/var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.303 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "/var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.332 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.419 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.421 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.422 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.433 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.452 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.493 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.494 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.534 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.536 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.536 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.600 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.601 2 DEBUG nova.virt.disk.api [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Checking if we can resize image /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.601 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.677 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.678 2 DEBUG nova.virt.disk.api [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Cannot resize image /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.678 2 DEBUG nova.objects.instance [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'migration_context' on Instance uuid 34caffd6-3092-4933-b1c0-39f0cd6da2b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.704 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.705 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Ensure instance console log exists: /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.705 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.705 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:09 np0005485008 nova_compute[192512]: 2025-10-13 15:52:09.706 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:09 np0005485008 podman[218649]: 2025-10-13 15:52:09.767281634 +0000 UTC m=+0.066605101 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 11:52:09 np0005485008 podman[218651]: 2025-10-13 15:52:09.770073761 +0000 UTC m=+0.059566641 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 11:52:09 np0005485008 podman[218652]: 2025-10-13 15:52:09.780515037 +0000 UTC m=+0.066332092 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:52:09 np0005485008 podman[218650]: 2025-10-13 15:52:09.804340632 +0000 UTC m=+0.094141682 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 11:52:09 np0005485008 podman[218658]: 2025-10-13 15:52:09.83533005 +0000 UTC m=+0.118699519 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251009)
Oct 13 11:52:10 np0005485008 nova_compute[192512]: 2025-10-13 15:52:10.036 2 DEBUG nova.network.neutron [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Successfully created port: 9b63c028-5482-465a-8767-fef751fd41e7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 11:52:10 np0005485008 nova_compute[192512]: 2025-10-13 15:52:10.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.281 2 DEBUG nova.network.neutron [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Successfully updated port: 9b63c028-5482-465a-8767-fef751fd41e7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.306 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.306 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquired lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.306 2 DEBUG nova.network.neutron [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.408 2 DEBUG nova.compute.manager [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received event network-changed-9b63c028-5482-465a-8767-fef751fd41e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.408 2 DEBUG nova.compute.manager [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Refreshing instance network info cache due to event network-changed-9b63c028-5482-465a-8767-fef751fd41e7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.409 2 DEBUG oslo_concurrency.lockutils [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:52:11 np0005485008 nova_compute[192512]: 2025-10-13 15:52:11.480 2 DEBUG nova.network.neutron [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 11:52:14 np0005485008 nova_compute[192512]: 2025-10-13 15:52:14.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.452 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.452 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.453 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.453 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.484 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.484 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.484 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.485 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.551 2 DEBUG nova.network.neutron [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Updating instance_info_cache with network_info: [{"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.577 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Releasing lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.578 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Instance network_info: |[{"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.579 2 DEBUG oslo_concurrency.lockutils [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.580 2 DEBUG nova.network.neutron [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Refreshing network info cache for port 9b63c028-5482-465a-8767-fef751fd41e7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.584 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Start _get_guest_xml network_info=[{"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.591 2 WARNING nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.596 2 DEBUG nova.virt.libvirt.host [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.597 2 DEBUG nova.virt.libvirt.host [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.602 2 DEBUG nova.virt.libvirt.host [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.603 2 DEBUG nova.virt.libvirt.host [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.603 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.604 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.604 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.604 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.605 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.605 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.605 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.606 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.606 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.606 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.606 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.607 2 DEBUG nova.virt.hardware [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.611 2 DEBUG nova.virt.libvirt.vif [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:52:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-273267175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-273267175',id=12,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-t82fjv20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:52:09Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=34caffd6-3092-4933-b1c0-39f0cd6da2b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.612 2 DEBUG nova.network.os_vif_util [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.612 2 DEBUG nova.network.os_vif_util [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:56,bridge_name='br-int',has_traffic_filtering=True,id=9b63c028-5482-465a-8767-fef751fd41e7,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b63c028-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.613 2 DEBUG nova.objects.instance [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'pci_devices' on Instance uuid 34caffd6-3092-4933-b1c0-39f0cd6da2b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.635 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <uuid>34caffd6-3092-4933-b1c0-39f0cd6da2b2</uuid>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <name>instance-0000000c</name>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteHostMaintenanceStrategy-server-273267175</nova:name>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:52:15</nova:creationTime>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:user uuid="c560a06879cb4d4a861db9e49a3f22ee">tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin</nova:user>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:project uuid="4d42867337c94506bc652a0e84c5f849">tempest-TestExecuteHostMaintenanceStrategy-870116793</nova:project>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        <nova:port uuid="9b63c028-5482-465a-8767-fef751fd41e7">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <entry name="serial">34caffd6-3092-4933-b1c0-39f0cd6da2b2</entry>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <entry name="uuid">34caffd6-3092-4933-b1c0-39f0cd6da2b2</entry>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk.config"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:ba:01:56"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <target dev="tap9b63c028-54"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/console.log" append="off"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:52:15 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:52:15 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:52:15 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:52:15 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.637 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Preparing to wait for external event network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.638 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.638 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.638 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.639 2 DEBUG nova.virt.libvirt.vif [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:52:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-273267175',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-273267175',id=12,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-t82fjv20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMa
intenanceStrategy-870116793-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:52:09Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=34caffd6-3092-4933-b1c0-39f0cd6da2b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.640 2 DEBUG nova.network.os_vif_util [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.641 2 DEBUG nova.network.os_vif_util [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:56,bridge_name='br-int',has_traffic_filtering=True,id=9b63c028-5482-465a-8767-fef751fd41e7,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b63c028-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.641 2 DEBUG os_vif [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:56,bridge_name='br-int',has_traffic_filtering=True,id=9b63c028-5482-465a-8767-fef751fd41e7,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b63c028-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b63c028-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b63c028-54, col_values=(('external_ids', {'iface-id': '9b63c028-5482-465a-8767-fef751fd41e7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:01:56', 'vm-uuid': '34caffd6-3092-4933-b1c0-39f0cd6da2b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:52:15 np0005485008 NetworkManager[51587]: <info>  [1760370735.6855] manager: (tap9b63c028-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.691 2 INFO os_vif [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:01:56,bridge_name='br-int',has_traffic_filtering=True,id=9b63c028-5482-465a-8767-fef751fd41e7,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b63c028-54')#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.755 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.755 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.756 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] No VIF found with MAC fa:16:3e:ba:01:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.756 2 INFO nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Using config drive#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.814 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.816 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5868MB free_disk=73.46557235717773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.816 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.816 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.910 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 34caffd6-3092-4933-b1c0-39f0cd6da2b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.910 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.911 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.982 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:52:15 np0005485008 nova_compute[192512]: 2025-10-13 15:52:15.996 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:52:16 np0005485008 nova_compute[192512]: 2025-10-13 15:52:16.016 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:52:16 np0005485008 nova_compute[192512]: 2025-10-13 15:52:16.017 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:16 np0005485008 nova_compute[192512]: 2025-10-13 15:52:16.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.157 2 INFO nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Creating config drive at /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk.config#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.162 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_unm20zq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.293 2 DEBUG oslo_concurrency.processutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_unm20zq" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:52:17 np0005485008 kernel: tap9b63c028-54: entered promiscuous mode
Oct 13 11:52:17 np0005485008 NetworkManager[51587]: <info>  [1760370737.3617] manager: (tap9b63c028-54): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:17Z|00129|binding|INFO|Claiming lport 9b63c028-5482-465a-8767-fef751fd41e7 for this chassis.
Oct 13 11:52:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:17Z|00130|binding|INFO|9b63c028-5482-465a-8767-fef751fd41e7: Claiming fa:16:3e:ba:01:56 10.100.0.3
Oct 13 11:52:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:17Z|00131|binding|INFO|Setting lport 9b63c028-5482-465a-8767-fef751fd41e7 ovn-installed in OVS
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:17Z|00132|binding|INFO|Setting lport 9b63c028-5482-465a-8767-fef751fd41e7 up in Southbound
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.381 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:01:56 10.100.0.3'], port_security=['fa:16:3e:ba:01:56 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '34caffd6-3092-4933-b1c0-39f0cd6da2b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=9b63c028-5482-465a-8767-fef751fd41e7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.383 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 9b63c028-5482-465a-8767-fef751fd41e7 in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 bound to our chassis#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.384 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6#033[00m
Oct 13 11:52:17 np0005485008 systemd-udevd[218772]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.402 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[65630193-58bd-4780-8486-f3f3b7018877]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.403 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped6faec0-81 in ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 11:52:17 np0005485008 systemd-machined[152551]: New machine qemu-10-instance-0000000c.
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.406 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped6faec0-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.406 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0f29fd80-9ce1-4884-a1cd-f3276539a4fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.408 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[67663b83-3066-4072-b60c-4276d96e3506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 NetworkManager[51587]: <info>  [1760370737.4171] device (tap9b63c028-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:52:17 np0005485008 systemd[1]: Started Virtual Machine qemu-10-instance-0000000c.
Oct 13 11:52:17 np0005485008 NetworkManager[51587]: <info>  [1760370737.4184] device (tap9b63c028-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.420 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b7c92e-4244-4915-99ea-64ba785d77cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.447 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[04595cc2-ba45-40a7-b4e6-fecdd6f2de1d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.476 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[afe1fe54-60f3-4ba4-8ae5-c09c963d2611]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.481 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2afab81d-1c9b-4daa-9650-5779a10d9ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 NetworkManager[51587]: <info>  [1760370737.4828] manager: (taped6faec0-80): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Oct 13 11:52:17 np0005485008 systemd-udevd[218776]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.517 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[75483813-7936-4382-8c7e-f3e08e9dcb7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.521 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f20fca18-aaf1-4519-997f-2911a768a431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 NetworkManager[51587]: <info>  [1760370737.5485] device (taped6faec0-80): carrier: link connected
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.556 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe0459f-c9c1-4a4f-9181-c9ab2d951557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.577 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c35d97-819f-4ec3-a35d-290daf545af1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427521, 'reachable_time': 28294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 218805, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.596 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[90ae6370-4e69-4b26-9fcd-8d3a576d8c4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:5332'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427521, 'tstamp': 427521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 218808, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.619 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[23bc195a-c2fb-42a4-a676-66d2ef06012f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427521, 'reachable_time': 28294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 218813, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.657 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3fbe83-9aca-4137-a99c-b7a5a4ff801e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.681 2 DEBUG nova.compute.manager [req-e0749f8f-da88-40b6-8698-25750535c65e req-6e62c42d-2097-48e9-a5ca-d9eda252abbf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received event network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.682 2 DEBUG oslo_concurrency.lockutils [req-e0749f8f-da88-40b6-8698-25750535c65e req-6e62c42d-2097-48e9-a5ca-d9eda252abbf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.682 2 DEBUG oslo_concurrency.lockutils [req-e0749f8f-da88-40b6-8698-25750535c65e req-6e62c42d-2097-48e9-a5ca-d9eda252abbf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.683 2 DEBUG oslo_concurrency.lockutils [req-e0749f8f-da88-40b6-8698-25750535c65e req-6e62c42d-2097-48e9-a5ca-d9eda252abbf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.683 2 DEBUG nova.compute.manager [req-e0749f8f-da88-40b6-8698-25750535c65e req-6e62c42d-2097-48e9-a5ca-d9eda252abbf 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Processing event network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.733 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1072bf65-4f52-4139-bc92-3161d93628d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.735 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.735 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.736 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped6faec0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:52:17 np0005485008 NetworkManager[51587]: <info>  [1760370737.7390] manager: (taped6faec0-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 13 11:52:17 np0005485008 kernel: taped6faec0-80: entered promiscuous mode
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.741 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped6faec0-80, col_values=(('external_ids', {'iface-id': '6f3de1b2-b216-4652-b39f-5d38c68f9bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:52:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:17Z|00133|binding|INFO|Releasing lport 6f3de1b2-b216-4652-b39f-5d38c68f9bbc from this chassis (sb_readonly=0)
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.748 2 DEBUG nova.network.neutron [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Updated VIF entry in instance network info cache for port 9b63c028-5482-465a-8767-fef751fd41e7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.749 2 DEBUG nova.network.neutron [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Updating instance_info_cache with network_info: [{"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.755 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.757 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[edb913c8-ac7e-43af-9eb1-7ca5dc21cbb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.758 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.pid.haproxy
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 13 11:52:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:17.759 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'env', 'PROCESS_TAG=haproxy-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 13 11:52:17 np0005485008 nova_compute[192512]: 2025-10-13 15:52:17.766 2 DEBUG oslo_concurrency.lockutils [req-2220c864-106d-48b8-a4d0-61255c52e77d req-980734b1-a4ba-4da1-a016-883800b4a58e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.081 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370738.0811224, 34caffd6-3092-4933-b1c0-39f0cd6da2b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.082 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] VM Started (Lifecycle Event)
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.085 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.088 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.094 2 INFO nova.virt.libvirt.driver [-] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Instance spawned successfully.
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.094 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.106 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.112 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.115 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.115 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.116 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.116 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.116 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.117 2 DEBUG nova.virt.libvirt.driver [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 11:52:18 np0005485008 podman[218846]: 2025-10-13 15:52:18.149910357 +0000 UTC m=+0.052161310 container create 9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.153 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.154 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370738.082194, 34caffd6-3092-4933-b1c0-39f0cd6da2b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.154 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] VM Paused (Lifecycle Event)
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.180 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.186 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370738.0875068, 34caffd6-3092-4933-b1c0-39f0cd6da2b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.186 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] VM Resumed (Lifecycle Event)
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.190 2 INFO nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Took 8.89 seconds to spawn the instance on the hypervisor.
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.190 2 DEBUG nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:52:18 np0005485008 systemd[1]: Started libpod-conmon-9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376.scope.
Oct 13 11:52:18 np0005485008 podman[218846]: 2025-10-13 15:52:18.122689227 +0000 UTC m=+0.024940200 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.221 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.224 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 11:52:18 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:52:18 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0bef52106b9f76555ca99aa2daab8c3df62f1d63055c7ebe5dc2f38e5e022f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 11:52:18 np0005485008 podman[218846]: 2025-10-13 15:52:18.246506123 +0000 UTC m=+0.148757076 container init 9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:52:18 np0005485008 podman[218846]: 2025-10-13 15:52:18.252584683 +0000 UTC m=+0.154835636 container start 9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.263 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 11:52:18 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218861]: [NOTICE]   (218865) : New worker (218867) forked
Oct 13 11:52:18 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218861]: [NOTICE]   (218865) : Loading success.
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.283 2 INFO nova.compute.manager [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Took 9.33 seconds to build instance.
Oct 13 11:52:18 np0005485008 nova_compute[192512]: 2025-10-13 15:52:18.309 2 DEBUG oslo_concurrency.lockutils [None req-1f1d9cb4-bd7b-4c7a-882d-e8c55ca3d7e0 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:52:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:52:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:52:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:52:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:52:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:52:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:52:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:52:19 np0005485008 nova_compute[192512]: 2025-10-13 15:52:19.769 2 DEBUG nova.compute.manager [req-73779ab5-1435-4c06-935c-ff65f1c31014 req-726d9b22-24ab-43d4-87e1-ebd21f9d9a85 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received event network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 11:52:19 np0005485008 nova_compute[192512]: 2025-10-13 15:52:19.772 2 DEBUG oslo_concurrency.lockutils [req-73779ab5-1435-4c06-935c-ff65f1c31014 req-726d9b22-24ab-43d4-87e1-ebd21f9d9a85 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:52:19 np0005485008 nova_compute[192512]: 2025-10-13 15:52:19.773 2 DEBUG oslo_concurrency.lockutils [req-73779ab5-1435-4c06-935c-ff65f1c31014 req-726d9b22-24ab-43d4-87e1-ebd21f9d9a85 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:52:19 np0005485008 nova_compute[192512]: 2025-10-13 15:52:19.773 2 DEBUG oslo_concurrency.lockutils [req-73779ab5-1435-4c06-935c-ff65f1c31014 req-726d9b22-24ab-43d4-87e1-ebd21f9d9a85 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:52:19 np0005485008 nova_compute[192512]: 2025-10-13 15:52:19.773 2 DEBUG nova.compute.manager [req-73779ab5-1435-4c06-935c-ff65f1c31014 req-726d9b22-24ab-43d4-87e1-ebd21f9d9a85 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] No waiting events found dispatching network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 11:52:19 np0005485008 nova_compute[192512]: 2025-10-13 15:52:19.774 2 WARNING nova.compute.manager [req-73779ab5-1435-4c06-935c-ff65f1c31014 req-726d9b22-24ab-43d4-87e1-ebd21f9d9a85 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received unexpected event network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 for instance with vm_state active and task_state None.
Oct 13 11:52:20 np0005485008 nova_compute[192512]: 2025-10-13 15:52:20.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:21 np0005485008 nova_compute[192512]: 2025-10-13 15:52:21.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:23 np0005485008 podman[218876]: 2025-10-13 15:52:23.771340933 +0000 UTC m=+0.070796282 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 11:52:25 np0005485008 nova_compute[192512]: 2025-10-13 15:52:25.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:26 np0005485008 nova_compute[192512]: 2025-10-13 15:52:26.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:30 np0005485008 nova_compute[192512]: 2025-10-13 15:52:30.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:30 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:30Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:01:56 10.100.0.3
Oct 13 11:52:30 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:30Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:01:56 10.100.0.3
Oct 13 11:52:31 np0005485008 nova_compute[192512]: 2025-10-13 15:52:31.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:33.955 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:52:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:33.956 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:52:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:52:33.957 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:52:35 np0005485008 podman[202884]: time="2025-10-13T15:52:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:52:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:52:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:52:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:52:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3456 "" "Go-http-client/1.1"
Oct 13 11:52:35 np0005485008 nova_compute[192512]: 2025-10-13 15:52:35.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:36 np0005485008 nova_compute[192512]: 2025-10-13 15:52:36.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:40 np0005485008 nova_compute[192512]: 2025-10-13 15:52:40.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:52:40 np0005485008 podman[218912]: 2025-10-13 15:52:40.77400627 +0000 UTC m=+0.058737046 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:52:40 np0005485008 podman[218910]: 2025-10-13 15:52:40.784340353 +0000 UTC m=+0.077150851 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:52:40 np0005485008 podman[218911]: 2025-10-13 15:52:40.801756447 +0000 UTC m=+0.091592882 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 13 11:52:40 np0005485008 podman[218913]: 2025-10-13 15:52:40.808799616 +0000 UTC m=+0.088856245 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:52:40 np0005485008 podman[218924]: 2025-10-13 15:52:40.822023399 +0000 UTC m=+0.098084604 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:52:41 np0005485008 nova_compute[192512]: 2025-10-13 15:52:41.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:45 np0005485008 nova_compute[192512]: 2025-10-13 15:52:45.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:46 np0005485008 nova_compute[192512]: 2025-10-13 15:52:46.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:47 np0005485008 ovn_controller[94758]: 2025-10-13T15:52:47Z|00134|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 13 11:52:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:52:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:52:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:52:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:52:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:52:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:52:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:52:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:52:50 np0005485008 nova_compute[192512]: 2025-10-13 15:52:50.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:51 np0005485008 nova_compute[192512]: 2025-10-13 15:52:51.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:54 np0005485008 podman[219012]: 2025-10-13 15:52:54.757563877 +0000 UTC m=+0.058608391 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 11:52:55 np0005485008 nova_compute[192512]: 2025-10-13 15:52:55.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:52:56 np0005485008 nova_compute[192512]: 2025-10-13 15:52:56.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:00 np0005485008 nova_compute[192512]: 2025-10-13 15:53:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:01 np0005485008 nova_compute[192512]: 2025-10-13 15:53:01.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:04 np0005485008 nova_compute[192512]: 2025-10-13 15:53:04.992 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:04 np0005485008 nova_compute[192512]: 2025-10-13 15:53:04.993 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:53:05 np0005485008 nova_compute[192512]: 2025-10-13 15:53:05.526 2 DEBUG nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Creating tmpfile /var/lib/nova/instances/tmpwc6n_c2m to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 11:53:05 np0005485008 nova_compute[192512]: 2025-10-13 15:53:05.527 2 DEBUG nova.compute.manager [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwc6n_c2m',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 11:53:05 np0005485008 podman[202884]: time="2025-10-13T15:53:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:53:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:53:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:53:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:53:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3463 "" "Go-http-client/1.1"
Oct 13 11:53:05 np0005485008 nova_compute[192512]: 2025-10-13 15:53:05.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:06 np0005485008 nova_compute[192512]: 2025-10-13 15:53:06.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:07 np0005485008 nova_compute[192512]: 2025-10-13 15:53:07.388 2 DEBUG nova.compute.manager [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwc6n_c2m',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='02608ecf-b689-40e6-b30e-84dbb0884e27',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 11:53:07 np0005485008 nova_compute[192512]: 2025-10-13 15:53:07.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:07 np0005485008 nova_compute[192512]: 2025-10-13 15:53:07.452 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-02608ecf-b689-40e6-b30e-84dbb0884e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:53:07 np0005485008 nova_compute[192512]: 2025-10-13 15:53:07.452 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-02608ecf-b689-40e6-b30e-84dbb0884e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:53:07 np0005485008 nova_compute[192512]: 2025-10-13 15:53:07.452 2 DEBUG nova.network.neutron [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:53:08 np0005485008 nova_compute[192512]: 2025-10-13 15:53:08.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:10 np0005485008 nova_compute[192512]: 2025-10-13 15:53:10.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:10 np0005485008 nova_compute[192512]: 2025-10-13 15:53:10.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:10 np0005485008 nova_compute[192512]: 2025-10-13 15:53:10.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.448 2 DEBUG nova.network.neutron [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Updating instance_info_cache with network_info: [{"id": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "address": "fa:16:3e:42:74:74", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c10d01-f7", "ovs_interfaceid": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.482 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-02608ecf-b689-40e6-b30e-84dbb0884e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.483 2 DEBUG nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwc6n_c2m',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='02608ecf-b689-40e6-b30e-84dbb0884e27',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.484 2 DEBUG nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Creating instance directory: /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.484 2 DEBUG nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Creating disk.info with the contents: {'/var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk': 'qcow2', '/var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.485 2 DEBUG nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.485 2 DEBUG nova.objects.instance [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 02608ecf-b689-40e6-b30e-84dbb0884e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.577 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.636 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.637 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.638 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.650 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.711 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.713 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.755 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.756 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.757 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:11 np0005485008 podman[219042]: 2025-10-13 15:53:11.777276317 +0000 UTC m=+0.062327858 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:53:11 np0005485008 podman[219041]: 2025-10-13 15:53:11.800359918 +0000 UTC m=+0.091463617 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true)
Oct 13 11:53:11 np0005485008 podman[219039]: 2025-10-13 15:53:11.802401871 +0000 UTC m=+0.100650734 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:53:11 np0005485008 podman[219040]: 2025-10-13 15:53:11.80618674 +0000 UTC m=+0.101123249 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:53:11 np0005485008 podman[219048]: 2025-10-13 15:53:11.812406094 +0000 UTC m=+0.094040348 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.817 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.821 2 DEBUG nova.virt.disk.api [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.821 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.877 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.878 2 DEBUG nova.virt.disk.api [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.879 2 DEBUG nova.objects.instance [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 02608ecf-b689-40e6-b30e-84dbb0884e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.891 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.919 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk.config 485376" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.920 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk.config to /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 11:53:11 np0005485008 nova_compute[192512]: 2025-10-13 15:53:11.921 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk.config /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.411 2 DEBUG oslo_concurrency.processutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk.config /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.412 2 DEBUG nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.413 2 DEBUG nova.virt.libvirt.vif [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:51:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1956224104',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1956224104',id=11,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:52:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-qkclbo34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:52:03Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=02608ecf-b689-40e6-b30e-84dbb0884e27,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "address": "fa:16:3e:42:74:74", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa0c10d01-f7", "ovs_interfaceid": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.414 2 DEBUG nova.network.os_vif_util [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "address": "fa:16:3e:42:74:74", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa0c10d01-f7", "ovs_interfaceid": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.415 2 DEBUG nova.network.os_vif_util [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:74:74,bridge_name='br-int',has_traffic_filtering=True,id=a0c10d01-f73e-4b3d-b306-a905f1ab35e0,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c10d01-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.415 2 DEBUG os_vif [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:74:74,bridge_name='br-int',has_traffic_filtering=True,id=a0c10d01-f73e-4b3d-b306-a905f1ab35e0,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c10d01-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.420 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0c10d01-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0c10d01-f7, col_values=(('external_ids', {'iface-id': 'a0c10d01-f73e-4b3d-b306-a905f1ab35e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:74:74', 'vm-uuid': '02608ecf-b689-40e6-b30e-84dbb0884e27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:12 np0005485008 NetworkManager[51587]: <info>  [1760370792.4253] manager: (tapa0c10d01-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.432 2 INFO os_vif [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:74:74,bridge_name='br-int',has_traffic_filtering=True,id=a0c10d01-f73e-4b3d-b306-a905f1ab35e0,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c10d01-f7')#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.433 2 DEBUG nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.433 2 DEBUG nova.compute.manager [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwc6n_c2m',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='02608ecf-b689-40e6-b30e-84dbb0884e27',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 11:53:12 np0005485008 nova_compute[192512]: 2025-10-13 15:53:12.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:12 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:12.985 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:53:12 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:12.985 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:53:13 np0005485008 nova_compute[192512]: 2025-10-13 15:53:13.365 2 DEBUG nova.network.neutron [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Port a0c10d01-f73e-4b3d-b306-a905f1ab35e0 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 11:53:13 np0005485008 nova_compute[192512]: 2025-10-13 15:53:13.369 2 DEBUG nova.compute.manager [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwc6n_c2m',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='02608ecf-b689-40e6-b30e-84dbb0884e27',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 11:53:13 np0005485008 kernel: tapa0c10d01-f7: entered promiscuous mode
Oct 13 11:53:13 np0005485008 NetworkManager[51587]: <info>  [1760370793.6579] manager: (tapa0c10d01-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Oct 13 11:53:13 np0005485008 nova_compute[192512]: 2025-10-13 15:53:13.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:13 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:13Z|00135|binding|INFO|Claiming lport a0c10d01-f73e-4b3d-b306-a905f1ab35e0 for this additional chassis.
Oct 13 11:53:13 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:13Z|00136|binding|INFO|a0c10d01-f73e-4b3d-b306-a905f1ab35e0: Claiming fa:16:3e:42:74:74 10.100.0.8
Oct 13 11:53:13 np0005485008 nova_compute[192512]: 2025-10-13 15:53:13.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:13 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:13Z|00137|binding|INFO|Setting lport a0c10d01-f73e-4b3d-b306-a905f1ab35e0 ovn-installed in OVS
Oct 13 11:53:13 np0005485008 nova_compute[192512]: 2025-10-13 15:53:13.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:13 np0005485008 systemd-udevd[219180]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:53:13 np0005485008 systemd-machined[152551]: New machine qemu-11-instance-0000000b.
Oct 13 11:53:13 np0005485008 NetworkManager[51587]: <info>  [1760370793.7082] device (tapa0c10d01-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:53:13 np0005485008 NetworkManager[51587]: <info>  [1760370793.7097] device (tapa0c10d01-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:53:13 np0005485008 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Oct 13 11:53:14 np0005485008 nova_compute[192512]: 2025-10-13 15:53:14.787 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370794.7867506, 02608ecf-b689-40e6-b30e-84dbb0884e27 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:53:14 np0005485008 nova_compute[192512]: 2025-10-13 15:53:14.788 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] VM Started (Lifecycle Event)#033[00m
Oct 13 11:53:14 np0005485008 nova_compute[192512]: 2025-10-13 15:53:14.812 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:53:15 np0005485008 nova_compute[192512]: 2025-10-13 15:53:15.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:15 np0005485008 nova_compute[192512]: 2025-10-13 15:53:15.525 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370795.5245883, 02608ecf-b689-40e6-b30e-84dbb0884e27 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:53:15 np0005485008 nova_compute[192512]: 2025-10-13 15:53:15.527 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:53:15 np0005485008 nova_compute[192512]: 2025-10-13 15:53:15.549 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:53:15 np0005485008 nova_compute[192512]: 2025-10-13 15:53:15.553 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:53:15 np0005485008 nova_compute[192512]: 2025-10-13 15:53:15.574 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 11:53:15 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:15.988 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.651 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.652 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.652 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:53:16 np0005485008 nova_compute[192512]: 2025-10-13 15:53:16.653 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 34caffd6-3092-4933-b1c0-39f0cd6da2b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:53:17 np0005485008 nova_compute[192512]: 2025-10-13 15:53:17.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:17Z|00138|binding|INFO|Claiming lport a0c10d01-f73e-4b3d-b306-a905f1ab35e0 for this chassis.
Oct 13 11:53:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:17Z|00139|binding|INFO|a0c10d01-f73e-4b3d-b306-a905f1ab35e0: Claiming fa:16:3e:42:74:74 10.100.0.8
Oct 13 11:53:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:17Z|00140|binding|INFO|Setting lport a0c10d01-f73e-4b3d-b306-a905f1ab35e0 up in Southbound
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.550 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:74:74 10.100.0.8'], port_security=['fa:16:3e:42:74:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02608ecf-b689-40e6-b30e-84dbb0884e27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '11', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=a0c10d01-f73e-4b3d-b306-a905f1ab35e0) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.552 103642 INFO neutron.agent.ovn.metadata.agent [-] Port a0c10d01-f73e-4b3d-b306-a905f1ab35e0 in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 bound to our chassis#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.553 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.572 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[edd49889-9e08-46b6-bda8-b90cd5cfa946]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.607 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc267b1-d594-4228-8dc8-c6eeb90412ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.611 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[abaa3fdc-8a18-44b8-bd63-1b94a3038963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.647 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0d62cd-66a3-4081-9a81-8bdb619ad2ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.670 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc8d2b8-ffc9-4dff-a6b1-416a10f9fad8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427521, 'reachable_time': 28294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 219215, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.691 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[676c03d4-a333-4f94-b1d1-971ce3006700]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427535, 'tstamp': 427535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 219216, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427539, 'tstamp': 427539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 219216, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.694 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:17 np0005485008 nova_compute[192512]: 2025-10-13 15:53:17.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.697 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped6faec0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:17 np0005485008 nova_compute[192512]: 2025-10-13 15:53:17.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.698 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.698 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped6faec0-80, col_values=(('external_ids', {'iface-id': '6f3de1b2-b216-4652-b39f-5d38c68f9bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:17.698 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:53:17 np0005485008 nova_compute[192512]: 2025-10-13 15:53:17.698 2 INFO nova.compute.manager [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Post operation of migration started#033[00m
Oct 13 11:53:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:53:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:53:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:53:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:53:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:53:19 np0005485008 nova_compute[192512]: 2025-10-13 15:53:19.623 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-02608ecf-b689-40e6-b30e-84dbb0884e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:53:19 np0005485008 nova_compute[192512]: 2025-10-13 15:53:19.624 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-02608ecf-b689-40e6-b30e-84dbb0884e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:53:19 np0005485008 nova_compute[192512]: 2025-10-13 15:53:19.625 2 DEBUG nova.network.neutron [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.206 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Updating instance_info_cache with network_info: [{"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.252 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-34caffd6-3092-4933-b1c0-39f0cd6da2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.252 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.253 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.276 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.277 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.277 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.277 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.440 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.499 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.500 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.559 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.564 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.627 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.628 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.684 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.740 2 DEBUG nova.network.neutron [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Updating instance_info_cache with network_info: [{"id": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "address": "fa:16:3e:42:74:74", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c10d01-f7", "ovs_interfaceid": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.858 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.860 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5541MB free_disk=73.4078369140625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.860 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.861 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:21 np0005485008 nova_compute[192512]: 2025-10-13 15:53:21.938 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-02608ecf-b689-40e6-b30e-84dbb0884e27" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.026 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.260 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration for instance 02608ecf-b689-40e6-b30e-84dbb0884e27 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.295 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Updating resource usage from migration fb4f145a-c5e2-450e-aa09-3a91bb585240#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.295 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Starting to track incoming migration fb4f145a-c5e2-450e-aa09-3a91bb585240 with flavor ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.361 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 34caffd6-3092-4933-b1c0-39f0cd6da2b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.398 2 WARNING nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 02608ecf-b689-40e6-b30e-84dbb0884e27 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.399 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.399 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.431 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.450 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.450 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.471 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.497 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.588 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.621 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.643 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.643 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.644 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.644 2 DEBUG oslo_concurrency.lockutils [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:22 np0005485008 nova_compute[192512]: 2025-10-13 15:53:22.649 2 INFO nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 11:53:22 np0005485008 virtqemud[192082]: Domain id=11 name='instance-0000000b' uuid=02608ecf-b689-40e6-b30e-84dbb0884e27 is tainted: custom-monitor
Oct 13 11:53:23 np0005485008 nova_compute[192512]: 2025-10-13 15:53:23.656 2 INFO nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 11:53:24 np0005485008 nova_compute[192512]: 2025-10-13 15:53:24.640 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:53:24 np0005485008 nova_compute[192512]: 2025-10-13 15:53:24.662 2 INFO nova.virt.libvirt.driver [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 11:53:24 np0005485008 nova_compute[192512]: 2025-10-13 15:53:24.667 2 DEBUG nova.compute.manager [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:53:24 np0005485008 nova_compute[192512]: 2025-10-13 15:53:24.714 2 DEBUG nova.objects.instance [None req-ae41ebe7-33c8-4864-8bc4-5ff165de618a f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 11:53:25 np0005485008 podman[219230]: 2025-10-13 15:53:25.783318895 +0000 UTC m=+0.069040597 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc.)
Oct 13 11:53:26 np0005485008 nova_compute[192512]: 2025-10-13 15:53:26.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:27 np0005485008 nova_compute[192512]: 2025-10-13 15:53:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.914 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.915 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.915 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.916 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.916 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.917 2 INFO nova.compute.manager [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Terminating instance#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.918 2 DEBUG nova.compute.manager [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:53:30 np0005485008 kernel: tap9b63c028-54 (unregistering): left promiscuous mode
Oct 13 11:53:30 np0005485008 NetworkManager[51587]: <info>  [1760370810.9493] device (tap9b63c028-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:53:30 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:30Z|00141|binding|INFO|Releasing lport 9b63c028-5482-465a-8767-fef751fd41e7 from this chassis (sb_readonly=0)
Oct 13 11:53:30 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:30Z|00142|binding|INFO|Setting lport 9b63c028-5482-465a-8767-fef751fd41e7 down in Southbound
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:30 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:30Z|00143|binding|INFO|Removing iface tap9b63c028-54 ovn-installed in OVS
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:30 np0005485008 nova_compute[192512]: 2025-10-13 15:53:30.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:30.982 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:01:56 10.100.0.3'], port_security=['fa:16:3e:ba:01:56 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '34caffd6-3092-4933-b1c0-39f0cd6da2b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=9b63c028-5482-465a-8767-fef751fd41e7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:53:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:30.983 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 9b63c028-5482-465a-8767-fef751fd41e7 in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 unbound from our chassis#033[00m
Oct 13 11:53:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:30.985 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.005 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[25cba0d0-c818-4cc4-b616-0a7f0fbeec49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:31 np0005485008 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 13 11:53:31 np0005485008 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000c.scope: Consumed 14.661s CPU time.
Oct 13 11:53:31 np0005485008 systemd-machined[152551]: Machine qemu-10-instance-0000000c terminated.
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.040 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ee19f8d2-d7c7-4348-a911-27eccc2c1fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.047 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[26dcbbbb-806b-4999-8a1d-0faeae685586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.080 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3b7a17-b2dc-4053-a11c-57b08ff1c9c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.101 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[16c4cb22-426c-4073-9d91-cebe7e7f987c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped6faec0-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:53:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427521, 'reachable_time': 28294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 219264, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.119 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c19b8d09-9485-4fd4-bb33-044e70378ffa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427535, 'tstamp': 427535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 219265, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped6faec0-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427539, 'tstamp': 427539}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 219265, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.122 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.130 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped6faec0-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.130 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.131 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped6faec0-80, col_values=(('external_ids', {'iface-id': '6f3de1b2-b216-4652-b39f-5d38c68f9bbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:31.131 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.192 2 INFO nova.virt.libvirt.driver [-] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Instance destroyed successfully.#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.192 2 DEBUG nova.objects.instance [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'resources' on Instance uuid 34caffd6-3092-4933-b1c0-39f0cd6da2b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.205 2 DEBUG nova.virt.libvirt.vif [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:52:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-273267175',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-273267175',id=12,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:52:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-t82fjv20',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_m
in_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:52:18Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=34caffd6-3092-4933-b1c0-39f0cd6da2b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.206 2 DEBUG nova.network.os_vif_util [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "9b63c028-5482-465a-8767-fef751fd41e7", "address": "fa:16:3e:ba:01:56", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b63c028-54", "ovs_interfaceid": "9b63c028-5482-465a-8767-fef751fd41e7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.207 2 DEBUG nova.network.os_vif_util [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:01:56,bridge_name='br-int',has_traffic_filtering=True,id=9b63c028-5482-465a-8767-fef751fd41e7,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b63c028-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.207 2 DEBUG os_vif [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:01:56,bridge_name='br-int',has_traffic_filtering=True,id=9b63c028-5482-465a-8767-fef751fd41e7,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b63c028-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b63c028-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.216 2 INFO os_vif [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:01:56,bridge_name='br-int',has_traffic_filtering=True,id=9b63c028-5482-465a-8767-fef751fd41e7,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b63c028-54')#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.216 2 INFO nova.virt.libvirt.driver [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Deleting instance files /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2_del#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.217 2 INFO nova.virt.libvirt.driver [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Deletion of /var/lib/nova/instances/34caffd6-3092-4933-b1c0-39f0cd6da2b2_del complete#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.327 2 DEBUG nova.compute.manager [req-679a3792-5369-45dd-a26f-f1d441432880 req-91908317-a72a-4cb6-8408-110b45b3d96f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received event network-vif-unplugged-9b63c028-5482-465a-8767-fef751fd41e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.328 2 DEBUG oslo_concurrency.lockutils [req-679a3792-5369-45dd-a26f-f1d441432880 req-91908317-a72a-4cb6-8408-110b45b3d96f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.328 2 DEBUG oslo_concurrency.lockutils [req-679a3792-5369-45dd-a26f-f1d441432880 req-91908317-a72a-4cb6-8408-110b45b3d96f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.329 2 DEBUG oslo_concurrency.lockutils [req-679a3792-5369-45dd-a26f-f1d441432880 req-91908317-a72a-4cb6-8408-110b45b3d96f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.329 2 DEBUG nova.compute.manager [req-679a3792-5369-45dd-a26f-f1d441432880 req-91908317-a72a-4cb6-8408-110b45b3d96f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] No waiting events found dispatching network-vif-unplugged-9b63c028-5482-465a-8767-fef751fd41e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.329 2 DEBUG nova.compute.manager [req-679a3792-5369-45dd-a26f-f1d441432880 req-91908317-a72a-4cb6-8408-110b45b3d96f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received event network-vif-unplugged-9b63c028-5482-465a-8767-fef751fd41e7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.330 2 INFO nova.compute.manager [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.330 2 DEBUG oslo.service.loopingcall [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.330 2 DEBUG nova.compute.manager [-] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.331 2 DEBUG nova.network.neutron [-] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.848 2 DEBUG nova.network.neutron [-] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.876 2 INFO nova.compute.manager [-] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Took 0.54 seconds to deallocate network for instance.#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.932 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.933 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:31 np0005485008 nova_compute[192512]: 2025-10-13 15:53:31.960 2 DEBUG nova.compute.manager [req-7de38ca6-00b6-4541-bf55-c15750426ec5 req-5421f6fe-4047-482a-9f86-9e9e80bce5c4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received event network-vif-deleted-9b63c028-5482-465a-8767-fef751fd41e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.025 2 DEBUG nova.compute.provider_tree [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.041 2 DEBUG nova.scheduler.client.report [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.072 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.095 2 INFO nova.scheduler.client.report [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Deleted allocations for instance 34caffd6-3092-4933-b1c0-39f0cd6da2b2#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.152 2 DEBUG oslo_concurrency.lockutils [None req-dafbccfb-49b2-4d5c-bad2-3690fa07a106 c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.908 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "02608ecf-b689-40e6-b30e-84dbb0884e27" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.909 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.909 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.909 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.909 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.910 2 INFO nova.compute.manager [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Terminating instance#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.911 2 DEBUG nova.compute.manager [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:53:32 np0005485008 kernel: tapa0c10d01-f7 (unregistering): left promiscuous mode
Oct 13 11:53:32 np0005485008 NetworkManager[51587]: <info>  [1760370812.9418] device (tapa0c10d01-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:32 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:32Z|00144|binding|INFO|Releasing lport a0c10d01-f73e-4b3d-b306-a905f1ab35e0 from this chassis (sb_readonly=0)
Oct 13 11:53:32 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:32Z|00145|binding|INFO|Setting lport a0c10d01-f73e-4b3d-b306-a905f1ab35e0 down in Southbound
Oct 13 11:53:32 np0005485008 ovn_controller[94758]: 2025-10-13T15:53:32Z|00146|binding|INFO|Removing iface tapa0c10d01-f7 ovn-installed in OVS
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:32.971 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:74:74 10.100.0.8'], port_security=['fa:16:3e:42:74:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '02608ecf-b689-40e6-b30e-84dbb0884e27', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d42867337c94506bc652a0e84c5f849', 'neutron:revision_number': '13', 'neutron:security_group_ids': '58c21b61-d3dc-4c91-8143-e19c227ff89f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97a3a12d-8a89-43e2-9a6d-7c3bdc21ad02, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=a0c10d01-f73e-4b3d-b306-a905f1ab35e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:53:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:32.973 103642 INFO neutron.agent.ovn.metadata.agent [-] Port a0c10d01-f73e-4b3d-b306-a905f1ab35e0 in datapath ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 unbound from our chassis#033[00m
Oct 13 11:53:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:32.973 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 11:53:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:32.975 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ea119bcd-1b5d-4eea-9215-7e4a428e1ca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:32.975 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 namespace which is not needed anymore#033[00m
Oct 13 11:53:32 np0005485008 nova_compute[192512]: 2025-10-13 15:53:32.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:33 np0005485008 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct 13 11:53:33 np0005485008 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 2.271s CPU time.
Oct 13 11:53:33 np0005485008 systemd-machined[152551]: Machine qemu-11-instance-0000000b terminated.
Oct 13 11:53:33 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218861]: [NOTICE]   (218865) : haproxy version is 2.8.14-c23fe91
Oct 13 11:53:33 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218861]: [NOTICE]   (218865) : path to executable is /usr/sbin/haproxy
Oct 13 11:53:33 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218861]: [WARNING]  (218865) : Exiting Master process...
Oct 13 11:53:33 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218861]: [ALERT]    (218865) : Current worker (218867) exited with code 143 (Terminated)
Oct 13 11:53:33 np0005485008 neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6[218861]: [WARNING]  (218865) : All workers exited. Exiting... (0)
Oct 13 11:53:33 np0005485008 systemd[1]: libpod-9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376.scope: Deactivated successfully.
Oct 13 11:53:33 np0005485008 podman[219306]: 2025-10-13 15:53:33.125790166 +0000 UTC m=+0.049116735 container died 9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:53:33 np0005485008 NetworkManager[51587]: <info>  [1760370813.1310] manager: (tapa0c10d01-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:33 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376-userdata-shm.mount: Deactivated successfully.
Oct 13 11:53:33 np0005485008 systemd[1]: var-lib-containers-storage-overlay-3b0bef52106b9f76555ca99aa2daab8c3df62f1d63055c7ebe5dc2f38e5e022f-merged.mount: Deactivated successfully.
Oct 13 11:53:33 np0005485008 podman[219306]: 2025-10-13 15:53:33.173387132 +0000 UTC m=+0.096713701 container cleanup 9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.187 2 INFO nova.virt.libvirt.driver [-] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Instance destroyed successfully.#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.188 2 DEBUG nova.objects.instance [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lazy-loading 'resources' on Instance uuid 02608ecf-b689-40e6-b30e-84dbb0884e27 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:53:33 np0005485008 systemd[1]: libpod-conmon-9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376.scope: Deactivated successfully.
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.201 2 DEBUG nova.virt.libvirt.vif [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T15:51:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteHostMaintenanceStrategy-server-1956224104',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutehostmaintenancestrategy-server-1956224104',id=11,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:52:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d42867337c94506bc652a0e84c5f849',ramdisk_id='',reservation_id='r-qkclbo34',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteHostMaintenanceStrategy-870116793',owner_user_name='tempest-TestExecuteHostMaintenanceStrategy-870116793-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:53:24Z,user_data=None,user_id='c560a06879cb4d4a861db9e49a3f22ee',uuid=02608ecf-b689-40e6-b30e-84dbb0884e27,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "address": "fa:16:3e:42:74:74", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c10d01-f7", "ovs_interfaceid": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.202 2 DEBUG nova.network.os_vif_util [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converting VIF {"id": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "address": "fa:16:3e:42:74:74", "network": {"id": "ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6", "bridge": "br-int", "label": "tempest-TestExecuteHostMaintenanceStrategy-2088463285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "272e6b4c08c24a3eb745442ce72e2edd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c10d01-f7", "ovs_interfaceid": "a0c10d01-f73e-4b3d-b306-a905f1ab35e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.202 2 DEBUG nova.network.os_vif_util [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:74:74,bridge_name='br-int',has_traffic_filtering=True,id=a0c10d01-f73e-4b3d-b306-a905f1ab35e0,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c10d01-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.203 2 DEBUG os_vif [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:74:74,bridge_name='br-int',has_traffic_filtering=True,id=a0c10d01-f73e-4b3d-b306-a905f1ab35e0,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c10d01-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0c10d01-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.213 2 INFO os_vif [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:74:74,bridge_name='br-int',has_traffic_filtering=True,id=a0c10d01-f73e-4b3d-b306-a905f1ab35e0,network=Network(ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c10d01-f7')#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.214 2 INFO nova.virt.libvirt.driver [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Deleting instance files /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27_del#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.214 2 INFO nova.virt.libvirt.driver [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Deletion of /var/lib/nova/instances/02608ecf-b689-40e6-b30e-84dbb0884e27_del complete#033[00m
Oct 13 11:53:33 np0005485008 podman[219353]: 2025-10-13 15:53:33.246518705 +0000 UTC m=+0.046210243 container remove 9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.251 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9eef990a-6751-444f-bb90-8c5a4aae323a]: (4, ('Mon Oct 13 03:53:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 (9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376)\n9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376\nMon Oct 13 03:53:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 (9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376)\n9c5f88e3c7b4b5a8639356c74a33e66ad94a801637da03cd2dca43b0925f2376\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.253 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c45924-6699-49b0-9162-2052b2ce35c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.254 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped6faec0-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:53:33 np0005485008 kernel: taped6faec0-80: left promiscuous mode
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.260 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[47421235-a703-4c8d-8c0e-fa13e7be3859]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.278 2 INFO nova.compute.manager [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.278 2 DEBUG oslo.service.loopingcall [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.279 2 DEBUG nova.compute.manager [-] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.279 2 DEBUG nova.network.neutron [-] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.292 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb6d410-06a3-4922-8d0e-3ec240f3f273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.293 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[00d826fa-1ab0-4b7c-ba46-0b30489069b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.310 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc2f8fe-5682-41c1-91a3-89c465ce9fae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427513, 'reachable_time': 38715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 219368, 'error': None, 'target': 'ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:33 np0005485008 systemd[1]: run-netns-ovnmeta\x2ded6faec0\x2d8362\x2d4a4b\x2da5bb\x2dd4f7a3a596b6.mount: Deactivated successfully.
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.313 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed6faec0-8362-4a4b-a5bb-d4f7a3a596b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.314 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0943e5-4f54-44ff-903f-e6c16680e31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.444 2 DEBUG nova.compute.manager [req-7ef09e17-e6e1-4f6a-a32e-6fc41327a564 req-e49116f9-f156-4439-a024-b97d5006476c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received event network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.444 2 DEBUG oslo_concurrency.lockutils [req-7ef09e17-e6e1-4f6a-a32e-6fc41327a564 req-e49116f9-f156-4439-a024-b97d5006476c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.445 2 DEBUG oslo_concurrency.lockutils [req-7ef09e17-e6e1-4f6a-a32e-6fc41327a564 req-e49116f9-f156-4439-a024-b97d5006476c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.445 2 DEBUG oslo_concurrency.lockutils [req-7ef09e17-e6e1-4f6a-a32e-6fc41327a564 req-e49116f9-f156-4439-a024-b97d5006476c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "34caffd6-3092-4933-b1c0-39f0cd6da2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.445 2 DEBUG nova.compute.manager [req-7ef09e17-e6e1-4f6a-a32e-6fc41327a564 req-e49116f9-f156-4439-a024-b97d5006476c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] No waiting events found dispatching network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:53:33 np0005485008 nova_compute[192512]: 2025-10-13 15:53:33.445 2 WARNING nova.compute.manager [req-7ef09e17-e6e1-4f6a-a32e-6fc41327a564 req-e49116f9-f156-4439-a024-b97d5006476c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Received unexpected event network-vif-plugged-9b63c028-5482-465a-8767-fef751fd41e7 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.956 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.956 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:53:33.957 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.108 2 DEBUG nova.compute.manager [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Received event network-vif-unplugged-a0c10d01-f73e-4b3d-b306-a905f1ab35e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.109 2 DEBUG oslo_concurrency.lockutils [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.109 2 DEBUG oslo_concurrency.lockutils [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.109 2 DEBUG oslo_concurrency.lockutils [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.109 2 DEBUG nova.compute.manager [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] No waiting events found dispatching network-vif-unplugged-a0c10d01-f73e-4b3d-b306-a905f1ab35e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.109 2 DEBUG nova.compute.manager [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Received event network-vif-unplugged-a0c10d01-f73e-4b3d-b306-a905f1ab35e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.110 2 DEBUG nova.compute.manager [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Received event network-vif-plugged-a0c10d01-f73e-4b3d-b306-a905f1ab35e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.110 2 DEBUG oslo_concurrency.lockutils [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.110 2 DEBUG oslo_concurrency.lockutils [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.110 2 DEBUG oslo_concurrency.lockutils [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.110 2 DEBUG nova.compute.manager [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] No waiting events found dispatching network-vif-plugged-a0c10d01-f73e-4b3d-b306-a905f1ab35e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.111 2 WARNING nova.compute.manager [req-091d48f0-556d-4e47-92fe-9c933b796af9 req-1c3cc5f5-06d7-45f7-bbfc-bcd95e68622c 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Received unexpected event network-vif-plugged-a0c10d01-f73e-4b3d-b306-a905f1ab35e0 for instance with vm_state active and task_state deleting.#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.689 2 DEBUG nova.network.neutron [-] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.708 2 INFO nova.compute.manager [-] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Took 1.43 seconds to deallocate network for instance.#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.754 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.755 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.760 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.783 2 INFO nova.scheduler.client.report [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Deleted allocations for instance 02608ecf-b689-40e6-b30e-84dbb0884e27#033[00m
Oct 13 11:53:34 np0005485008 nova_compute[192512]: 2025-10-13 15:53:34.851 2 DEBUG oslo_concurrency.lockutils [None req-fdf14336-0c12-4602-a77b-adc55c48d6ca c560a06879cb4d4a861db9e49a3f22ee 4d42867337c94506bc652a0e84c5f849 - - default default] Lock "02608ecf-b689-40e6-b30e-84dbb0884e27" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:53:35 np0005485008 podman[202884]: time="2025-10-13T15:53:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:53:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:53:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:53:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:53:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Oct 13 11:53:36 np0005485008 nova_compute[192512]: 2025-10-13 15:53:36.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:36 np0005485008 nova_compute[192512]: 2025-10-13 15:53:36.182 2 DEBUG nova.compute.manager [req-721fc33b-e625-43ec-97f3-58056665e8c2 req-6ee08f76-b6b7-42de-9fe9-4c6974b97ace 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Received event network-vif-deleted-a0c10d01-f73e-4b3d-b306-a905f1ab35e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:53:38 np0005485008 nova_compute[192512]: 2025-10-13 15:53:38.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:41 np0005485008 nova_compute[192512]: 2025-10-13 15:53:41.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:42 np0005485008 podman[219373]: 2025-10-13 15:53:42.768204376 +0000 UTC m=+0.059810508 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct 13 11:53:42 np0005485008 podman[219372]: 2025-10-13 15:53:42.769445185 +0000 UTC m=+0.065116355 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:53:42 np0005485008 podman[219371]: 2025-10-13 15:53:42.775655649 +0000 UTC m=+0.074507068 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 13 11:53:42 np0005485008 podman[219374]: 2025-10-13 15:53:42.775822615 +0000 UTC m=+0.064247888 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:53:42 np0005485008 podman[219380]: 2025-10-13 15:53:42.81152632 +0000 UTC m=+0.096151915 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:53:43 np0005485008 nova_compute[192512]: 2025-10-13 15:53:43.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:46 np0005485008 nova_compute[192512]: 2025-10-13 15:53:46.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:46 np0005485008 nova_compute[192512]: 2025-10-13 15:53:46.190 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370811.1890361, 34caffd6-3092-4933-b1c0-39f0cd6da2b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:53:46 np0005485008 nova_compute[192512]: 2025-10-13 15:53:46.191 2 INFO nova.compute.manager [-] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:53:46 np0005485008 nova_compute[192512]: 2025-10-13 15:53:46.226 2 DEBUG nova.compute.manager [None req-ecf6fe37-d429-4be3-b7ed-597cb2f30cee - - - - - -] [instance: 34caffd6-3092-4933-b1c0-39f0cd6da2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:53:48 np0005485008 nova_compute[192512]: 2025-10-13 15:53:48.183 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760370813.181925, 02608ecf-b689-40e6-b30e-84dbb0884e27 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:53:48 np0005485008 nova_compute[192512]: 2025-10-13 15:53:48.184 2 INFO nova.compute.manager [-] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:53:48 np0005485008 nova_compute[192512]: 2025-10-13 15:53:48.211 2 DEBUG nova.compute.manager [None req-996d1e2d-e81f-4e93-9fc0-584ae2a501f2 - - - - - -] [instance: 02608ecf-b689-40e6-b30e-84dbb0884e27] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:53:48 np0005485008 nova_compute[192512]: 2025-10-13 15:53:48.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:53:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:53:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:53:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:53:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:53:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:53:51 np0005485008 nova_compute[192512]: 2025-10-13 15:53:51.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:53 np0005485008 nova_compute[192512]: 2025-10-13 15:53:53.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:56 np0005485008 nova_compute[192512]: 2025-10-13 15:53:56.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:53:56 np0005485008 podman[219476]: 2025-10-13 15:53:56.748515762 +0000 UTC m=+0.052446080 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Oct 13 11:53:58 np0005485008 nova_compute[192512]: 2025-10-13 15:53:58.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:01 np0005485008 nova_compute[192512]: 2025-10-13 15:54:01.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:03 np0005485008 nova_compute[192512]: 2025-10-13 15:54:03.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:05 np0005485008 podman[202884]: time="2025-10-13T15:54:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:54:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:54:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:54:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:54:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Oct 13 11:54:06 np0005485008 nova_compute[192512]: 2025-10-13 15:54:06.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:06 np0005485008 nova_compute[192512]: 2025-10-13 15:54:06.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:06 np0005485008 nova_compute[192512]: 2025-10-13 15:54:06.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:54:08 np0005485008 nova_compute[192512]: 2025-10-13 15:54:08.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:08 np0005485008 nova_compute[192512]: 2025-10-13 15:54:08.435 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:10 np0005485008 nova_compute[192512]: 2025-10-13 15:54:10.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:11 np0005485008 nova_compute[192512]: 2025-10-13 15:54:11.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:11 np0005485008 nova_compute[192512]: 2025-10-13 15:54:11.425 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:11 np0005485008 nova_compute[192512]: 2025-10-13 15:54:11.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:12 np0005485008 nova_compute[192512]: 2025-10-13 15:54:12.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:13 np0005485008 nova_compute[192512]: 2025-10-13 15:54:13.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:13 np0005485008 podman[219495]: 2025-10-13 15:54:13.768626107 +0000 UTC m=+0.070247345 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 13 11:54:13 np0005485008 podman[219503]: 2025-10-13 15:54:13.811603999 +0000 UTC m=+0.091627263 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:54:13 np0005485008 podman[219496]: 2025-10-13 15:54:13.81132653 +0000 UTC m=+0.103769551 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 11:54:13 np0005485008 podman[219511]: 2025-10-13 15:54:13.820997072 +0000 UTC m=+0.095574936 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 11:54:13 np0005485008 podman[219502]: 2025-10-13 15:54:13.836989782 +0000 UTC m=+0.111793242 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 13 11:54:16 np0005485008 nova_compute[192512]: 2025-10-13 15:54:16.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:16 np0005485008 nova_compute[192512]: 2025-10-13 15:54:16.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:17 np0005485008 nova_compute[192512]: 2025-10-13 15:54:17.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:54:17.138 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:54:17 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:54:17.140 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:54:17 np0005485008 nova_compute[192512]: 2025-10-13 15:54:17.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:17 np0005485008 nova_compute[192512]: 2025-10-13 15:54:17.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:54:17 np0005485008 nova_compute[192512]: 2025-10-13 15:54:17.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:54:17 np0005485008 nova_compute[192512]: 2025-10-13 15:54:17.842 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:54:17 np0005485008 nova_compute[192512]: 2025-10-13 15:54:17.843 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.010 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.011 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.011 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.012 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.186 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.188 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5864MB free_disk=73.46574020385742GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.188 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.188 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.318 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.318 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.353 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.371 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.420 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:54:18 np0005485008 nova_compute[192512]: 2025-10-13 15:54:18.421 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:54:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:54:19.146 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:54:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:54:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:54:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:54:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:54:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:54:21 np0005485008 nova_compute[192512]: 2025-10-13 15:54:21.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:23 np0005485008 nova_compute[192512]: 2025-10-13 15:54:23.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:26 np0005485008 nova_compute[192512]: 2025-10-13 15:54:26.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:27 np0005485008 podman[219596]: 2025-10-13 15:54:27.752579437 +0000 UTC m=+0.059400297 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Oct 13 11:54:28 np0005485008 nova_compute[192512]: 2025-10-13 15:54:28.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:31 np0005485008 nova_compute[192512]: 2025-10-13 15:54:31.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:32 np0005485008 ovn_controller[94758]: 2025-10-13T15:54:32Z|00147|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 13 11:54:33 np0005485008 nova_compute[192512]: 2025-10-13 15:54:33.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:54:33.957 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:54:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:54:33.958 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:54:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:54:33.958 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:54:35 np0005485008 podman[202884]: time="2025-10-13T15:54:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:54:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:54:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:54:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:54:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3002 "" "Go-http-client/1.1"
Oct 13 11:54:36 np0005485008 nova_compute[192512]: 2025-10-13 15:54:36.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:38 np0005485008 nova_compute[192512]: 2025-10-13 15:54:38.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:41 np0005485008 nova_compute[192512]: 2025-10-13 15:54:41.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:43 np0005485008 nova_compute[192512]: 2025-10-13 15:54:43.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:44 np0005485008 podman[219619]: 2025-10-13 15:54:44.783377494 +0000 UTC m=+0.075021415 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 11:54:44 np0005485008 podman[219618]: 2025-10-13 15:54:44.786538983 +0000 UTC m=+0.078124002 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:54:44 np0005485008 podman[219620]: 2025-10-13 15:54:44.80953577 +0000 UTC m=+0.097878248 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:54:44 np0005485008 podman[219617]: 2025-10-13 15:54:44.827949036 +0000 UTC m=+0.118176142 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:54:44 np0005485008 podman[219621]: 2025-10-13 15:54:44.828140862 +0000 UTC m=+0.104591388 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:54:46 np0005485008 nova_compute[192512]: 2025-10-13 15:54:46.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:48 np0005485008 nova_compute[192512]: 2025-10-13 15:54:48.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:54:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:54:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:54:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:54:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:54:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:54:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:54:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:54:51 np0005485008 nova_compute[192512]: 2025-10-13 15:54:51.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:53 np0005485008 nova_compute[192512]: 2025-10-13 15:54:53.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:56 np0005485008 nova_compute[192512]: 2025-10-13 15:54:56.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:58 np0005485008 nova_compute[192512]: 2025-10-13 15:54:58.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:54:58 np0005485008 podman[219718]: 2025-10-13 15:54:58.763520055 +0000 UTC m=+0.058563780 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Oct 13 11:55:01 np0005485008 nova_compute[192512]: 2025-10-13 15:55:01.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:03 np0005485008 nova_compute[192512]: 2025-10-13 15:55:03.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:05 np0005485008 podman[202884]: time="2025-10-13T15:55:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:55:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:55:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:55:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:55:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3000 "" "Go-http-client/1.1"
Oct 13 11:55:06 np0005485008 nova_compute[192512]: 2025-10-13 15:55:06.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:07 np0005485008 nova_compute[192512]: 2025-10-13 15:55:07.005 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:07 np0005485008 nova_compute[192512]: 2025-10-13 15:55:07.006 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:55:08 np0005485008 nova_compute[192512]: 2025-10-13 15:55:08.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:10 np0005485008 nova_compute[192512]: 2025-10-13 15:55:10.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:11 np0005485008 nova_compute[192512]: 2025-10-13 15:55:11.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:11 np0005485008 nova_compute[192512]: 2025-10-13 15:55:11.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:12 np0005485008 nova_compute[192512]: 2025-10-13 15:55:12.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:12 np0005485008 nova_compute[192512]: 2025-10-13 15:55:12.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:13 np0005485008 nova_compute[192512]: 2025-10-13 15:55:13.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:13 np0005485008 nova_compute[192512]: 2025-10-13 15:55:13.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:14 np0005485008 nova_compute[192512]: 2025-10-13 15:55:14.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:15 np0005485008 podman[219742]: 2025-10-13 15:55:15.779383399 +0000 UTC m=+0.069827803 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 11:55:15 np0005485008 podman[219740]: 2025-10-13 15:55:15.790432655 +0000 UTC m=+0.088350192 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible)
Oct 13 11:55:15 np0005485008 podman[219739]: 2025-10-13 15:55:15.790429105 +0000 UTC m=+0.092687399 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:55:15 np0005485008 podman[219741]: 2025-10-13 15:55:15.790428705 +0000 UTC m=+0.084313717 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 11:55:15 np0005485008 podman[219748]: 2025-10-13 15:55:15.841654106 +0000 UTC m=+0.129073316 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:55:15 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 11:55:16 np0005485008 nova_compute[192512]: 2025-10-13 15:55:16.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:16 np0005485008 nova_compute[192512]: 2025-10-13 15:55:16.441 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:16 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:16.948 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:55:16 np0005485008 nova_compute[192512]: 2025-10-13 15:55:16.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:16 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:16.950 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.455 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.455 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.456 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.456 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.620 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.621 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5859MB free_disk=73.46575927734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.621 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.621 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.819 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.820 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.909 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.925 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.928 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:55:17 np0005485008 nova_compute[192512]: 2025-10-13 15:55:17.928 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:18 np0005485008 nova_compute[192512]: 2025-10-13 15:55:18.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:18 np0005485008 nova_compute[192512]: 2025-10-13 15:55:18.923 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:18 np0005485008 nova_compute[192512]: 2025-10-13 15:55:18.939 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:18 np0005485008 nova_compute[192512]: 2025-10-13 15:55:18.940 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:55:18 np0005485008 nova_compute[192512]: 2025-10-13 15:55:18.940 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:55:18 np0005485008 nova_compute[192512]: 2025-10-13 15:55:18.955 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:55:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:55:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:55:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:55:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:55:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:55:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:55:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:55:21 np0005485008 nova_compute[192512]: 2025-10-13 15:55:21.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:21.952 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:55:23 np0005485008 nova_compute[192512]: 2025-10-13 15:55:23.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:23 np0005485008 nova_compute[192512]: 2025-10-13 15:55:23.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:23 np0005485008 nova_compute[192512]: 2025-10-13 15:55:23.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 11:55:23 np0005485008 nova_compute[192512]: 2025-10-13 15:55:23.442 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 11:55:26 np0005485008 nova_compute[192512]: 2025-10-13 15:55:26.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:26 np0005485008 nova_compute[192512]: 2025-10-13 15:55:26.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:26 np0005485008 nova_compute[192512]: 2025-10-13 15:55:26.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 11:55:28 np0005485008 nova_compute[192512]: 2025-10-13 15:55:28.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:29 np0005485008 podman[219837]: 2025-10-13 15:55:29.774122017 +0000 UTC m=+0.081666894 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 11:55:31 np0005485008 nova_compute[192512]: 2025-10-13 15:55:31.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:33 np0005485008 nova_compute[192512]: 2025-10-13 15:55:33.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:33.958 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:33.959 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:33.960 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:34 np0005485008 nova_compute[192512]: 2025-10-13 15:55:34.764 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:55:35 np0005485008 podman[202884]: time="2025-10-13T15:55:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:55:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:55:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:55:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:55:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2997 "" "Go-http-client/1.1"
Oct 13 11:55:36 np0005485008 nova_compute[192512]: 2025-10-13 15:55:36.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.007 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.008 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.026 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.106 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.107 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.112 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.113 2 INFO nova.compute.claims [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.208 2 DEBUG nova.compute.provider_tree [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.224 2 DEBUG nova.scheduler.client.report [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.258 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.258 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.310 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.312 2 DEBUG nova.network.neutron [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.370 2 INFO nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.400 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.483 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.485 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.485 2 INFO nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Creating image(s)#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.486 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "/var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.486 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.487 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.499 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.559 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.560 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.561 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.571 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.633 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.635 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.668 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk 1073741824" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.669 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.669 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.729 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.730 2 DEBUG nova.virt.disk.api [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Checking if we can resize image /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.731 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.793 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.794 2 DEBUG nova.virt.disk.api [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Cannot resize image /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.794 2 DEBUG nova.objects.instance [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'migration_context' on Instance uuid ee07a001-cef5-44fc-907a-ce9ed6b68b02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.818 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.818 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Ensure instance console log exists: /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.819 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.819 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:55:38 np0005485008 nova_compute[192512]: 2025-10-13 15:55:38.819 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:55:39 np0005485008 nova_compute[192512]: 2025-10-13 15:55:39.163 2 DEBUG nova.network.neutron [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Successfully created port: 08102733-8c09-4638-ad9d-7300416d4e60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.212 2 DEBUG nova.network.neutron [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Successfully updated port: 08102733-8c09-4638-ad9d-7300416d4e60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.228 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.228 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquired lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.228 2 DEBUG nova.network.neutron [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.316 2 DEBUG nova.compute.manager [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-changed-08102733-8c09-4638-ad9d-7300416d4e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.316 2 DEBUG nova.compute.manager [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Refreshing instance network info cache due to event network-changed-08102733-8c09-4638-ad9d-7300416d4e60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.317 2 DEBUG oslo_concurrency.lockutils [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 13 11:55:40 np0005485008 nova_compute[192512]: 2025-10-13 15:55:40.371 2 DEBUG nova.network.neutron [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.174 2 DEBUG nova.network.neutron [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Updating instance_info_cache with network_info: [{"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.195 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Releasing lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.196 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Instance network_info: |[{"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.197 2 DEBUG oslo_concurrency.lockutils [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.197 2 DEBUG nova.network.neutron [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Refreshing network info cache for port 08102733-8c09-4638-ad9d-7300416d4e60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.202 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Start _get_guest_xml network_info=[{"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.208 2 WARNING nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.215 2 DEBUG nova.virt.libvirt.host [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.215 2 DEBUG nova.virt.libvirt.host [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.223 2 DEBUG nova.virt.libvirt.host [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.224 2 DEBUG nova.virt.libvirt.host [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.224 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.224 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.225 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.225 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.226 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.226 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.227 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.227 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.227 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.228 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.228 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.228 2 DEBUG nova.virt.hardware [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.234 2 DEBUG nova.virt.libvirt.vif [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:55:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1336596117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1336596117',id=14,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-mmwu6mm5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=TagList,t
ask_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:55:38Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=ee07a001-cef5-44fc-907a-ce9ed6b68b02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.234 2 DEBUG nova.network.os_vif_util [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.235 2 DEBUG nova.network.os_vif_util [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:10:73,bridge_name='br-int',has_traffic_filtering=True,id=08102733-8c09-4638-ad9d-7300416d4e60,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08102733-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.236 2 DEBUG nova.objects.instance [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee07a001-cef5-44fc-907a-ce9ed6b68b02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.258 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <uuid>ee07a001-cef5-44fc-907a-ce9ed6b68b02</uuid>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <name>instance-0000000e</name>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteStrategies-server-1336596117</nova:name>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:55:41</nova:creationTime>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:user uuid="3f85e781b03b405795a2079908bd2792">tempest-TestExecuteStrategies-1416319229-project-admin</nova:user>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:project uuid="4d9418fd42c841d38cbfc7819a3fca65">tempest-TestExecuteStrategies-1416319229</nova:project>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        <nova:port uuid="08102733-8c09-4638-ad9d-7300416d4e60">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <entry name="serial">ee07a001-cef5-44fc-907a-ce9ed6b68b02</entry>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <entry name="uuid">ee07a001-cef5-44fc-907a-ce9ed6b68b02</entry>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk.config"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:97:10:73"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <target dev="tap08102733-8c"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/console.log" append="off"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:55:41 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:55:41 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:55:41 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:55:41 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.259 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Preparing to wait for external event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.260 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.260 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.261 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.262 2 DEBUG nova.virt.libvirt.vif [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:55:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1336596117',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1336596117',id=14,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-mmwu6mm5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:55:38Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=ee07a001-cef5-44fc-907a-ce9ed6b68b02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.263 2 DEBUG nova.network.os_vif_util [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.264 2 DEBUG nova.network.os_vif_util [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:10:73,bridge_name='br-int',has_traffic_filtering=True,id=08102733-8c09-4638-ad9d-7300416d4e60,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08102733-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.264 2 DEBUG os_vif [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:10:73,bridge_name='br-int',has_traffic_filtering=True,id=08102733-8c09-4638-ad9d-7300416d4e60,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08102733-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08102733-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.273 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap08102733-8c, col_values=(('external_ids', {'iface-id': '08102733-8c09-4638-ad9d-7300416d4e60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:10:73', 'vm-uuid': 'ee07a001-cef5-44fc-907a-ce9ed6b68b02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 NetworkManager[51587]: <info>  [1760370941.2777] manager: (tap08102733-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.285 2 INFO os_vif [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:10:73,bridge_name='br-int',has_traffic_filtering=True,id=08102733-8c09-4638-ad9d-7300416d4e60,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08102733-8c')
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.351 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.352 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.352 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No VIF found with MAC fa:16:3e:97:10:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.353 2 INFO nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Using config drive
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.727 2 INFO nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Creating config drive at /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk.config
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.735 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp69hzrf0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.862 2 DEBUG oslo_concurrency.processutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp69hzrf0" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 13 11:55:41 np0005485008 kernel: tap08102733-8c: entered promiscuous mode
Oct 13 11:55:41 np0005485008 NetworkManager[51587]: <info>  [1760370941.9305] manager: (tap08102733-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 ovn_controller[94758]: 2025-10-13T15:55:41Z|00148|binding|INFO|Claiming lport 08102733-8c09-4638-ad9d-7300416d4e60 for this chassis.
Oct 13 11:55:41 np0005485008 ovn_controller[94758]: 2025-10-13T15:55:41Z|00149|binding|INFO|08102733-8c09-4638-ad9d-7300416d4e60: Claiming fa:16:3e:97:10:73 10.100.0.12
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 nova_compute[192512]: 2025-10-13 15:55:41.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.955 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:10:73 10.100.0.12'], port_security=['fa:16:3e:97:10:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ee07a001-cef5-44fc-907a-ce9ed6b68b02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=08102733-8c09-4638-ad9d-7300416d4e60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.956 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 08102733-8c09-4638-ad9d-7300416d4e60 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.957 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 11:55:41 np0005485008 systemd-udevd[219893]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.972 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f3a5e9-82b8-4813-9d1c-506664535307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.973 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39a43da9-c1 in ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.976 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39a43da9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.976 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9d2941-c0b6-4d8f-80ba-baaa9cf90433]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.979 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[879ded41-d503-465f-9c5d-4362ed1b38c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:41 np0005485008 NetworkManager[51587]: <info>  [1760370941.9873] device (tap08102733-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:55:41 np0005485008 NetworkManager[51587]: <info>  [1760370941.9879] device (tap08102733-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:55:41 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:41.992 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6dfcc7-f081-4868-9fc4-72b8cd0be5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:41 np0005485008 systemd-machined[152551]: New machine qemu-12-instance-0000000e.
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.012 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[218f54be-c7df-42e2-8684-d7218d77a6da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:42 np0005485008 ovn_controller[94758]: 2025-10-13T15:55:42Z|00150|binding|INFO|Setting lport 08102733-8c09-4638-ad9d-7300416d4e60 ovn-installed in OVS
Oct 13 11:55:42 np0005485008 ovn_controller[94758]: 2025-10-13T15:55:42Z|00151|binding|INFO|Setting lport 08102733-8c09-4638-ad9d-7300416d4e60 up in Southbound
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.039 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9abbae-07ce-42c1-bd9c-4976272ca6ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.057 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[35601e05-74cc-40e5-88cd-d7c5e5eb4915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:42 np0005485008 systemd[1]: Started Virtual Machine qemu-12-instance-0000000e.
Oct 13 11:55:42 np0005485008 NetworkManager[51587]: <info>  [1760370942.0595] manager: (tap39a43da9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.094 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bf4301-ec7b-4315-a763-66e8aa0df7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.098 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[55516f35-acc6-42dd-90f6-a9f5df0c3e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 13 11:55:42 np0005485008 NetworkManager[51587]: <info>  [1760370942.1220] device (tap39a43da9-c0): carrier: link connected
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.128 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[727f0a7d-5f96-438f-a0e5-2078c00d815d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.146 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[cc568d03-8bf4-4f68-851c-bfe16c71f694]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447978, 'reachable_time': 28188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 219926, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.161 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7112d1-5197-4227-8fe2-4b0988146f38]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:43e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447978, 'tstamp': 447978}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 219927, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.181 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[aa91c497-993f-4f83-88f2-cc53a050dd1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447978, 'reachable_time': 28188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 219928, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.213 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[be99c25f-082b-4e12-9588-fa4bc2d09353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.282 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf9a44e-071b-45b6-af90-d7abf9761339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.284 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.284 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.285 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:42 np0005485008 kernel: tap39a43da9-c0: entered promiscuous mode
Oct 13 11:55:42 np0005485008 NetworkManager[51587]: <info>  [1760370942.2887] manager: (tap39a43da9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.292 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:42 np0005485008 ovn_controller[94758]: 2025-10-13T15:55:42Z|00152|binding|INFO|Releasing lport 5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182 from this chassis (sb_readonly=0)
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.294 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.295 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[03179955-e4ac-4da1-9785-f0d505bd4c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.296 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 11:55:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:55:42.299 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'env', 'PROCESS_TAG=haproxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39a43da9-cf4c-4fe3-ab73-bf8705320dae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.469 2 DEBUG nova.compute.manager [req-e4aa0f10-4f4d-4ec0-a52e-ec05d6366e3d req-c693992c-5636-4e39-97e1-c15a39f12f53 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.469 2 DEBUG oslo_concurrency.lockutils [req-e4aa0f10-4f4d-4ec0-a52e-ec05d6366e3d req-c693992c-5636-4e39-97e1-c15a39f12f53 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.469 2 DEBUG oslo_concurrency.lockutils [req-e4aa0f10-4f4d-4ec0-a52e-ec05d6366e3d req-c693992c-5636-4e39-97e1-c15a39f12f53 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.470 2 DEBUG oslo_concurrency.lockutils [req-e4aa0f10-4f4d-4ec0-a52e-ec05d6366e3d req-c693992c-5636-4e39-97e1-c15a39f12f53 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.470 2 DEBUG nova.compute.manager [req-e4aa0f10-4f4d-4ec0-a52e-ec05d6366e3d req-c693992c-5636-4e39-97e1-c15a39f12f53 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Processing event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.480 2 DEBUG nova.network.neutron [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Updated VIF entry in instance network info cache for port 08102733-8c09-4638-ad9d-7300416d4e60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.481 2 DEBUG nova.network.neutron [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Updating instance_info_cache with network_info: [{"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:55:42 np0005485008 nova_compute[192512]: 2025-10-13 15:55:42.501 2 DEBUG oslo_concurrency.lockutils [req-b790dd91-96bd-420e-b97c-74477d977bc7 req-3df573d0-db11-4aab-84e0-75d27ad85a1f 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:55:42 np0005485008 podman[219967]: 2025-10-13 15:55:42.709965618 +0000 UTC m=+0.052398869 container create 9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:55:42 np0005485008 systemd[1]: Started libpod-conmon-9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e.scope.
Oct 13 11:55:42 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:55:42 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d05ee41532de44f8aa382ca171b8840b0db8ed161ad192f72626aa10c200dea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 11:55:42 np0005485008 podman[219967]: 2025-10-13 15:55:42.682363735 +0000 UTC m=+0.024797006 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:55:42 np0005485008 podman[219967]: 2025-10-13 15:55:42.785249312 +0000 UTC m=+0.127682583 container init 9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:55:42 np0005485008 podman[219967]: 2025-10-13 15:55:42.791205237 +0000 UTC m=+0.133638488 container start 9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 11:55:42 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[219982]: [NOTICE]   (219986) : New worker (219988) forked
Oct 13 11:55:42 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[219982]: [NOTICE]   (219986) : Loading success.
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.054 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370943.0538335, ee07a001-cef5-44fc-907a-ce9ed6b68b02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.055 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] VM Started (Lifecycle Event)#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.057 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.060 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.064 2 INFO nova.virt.libvirt.driver [-] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Instance spawned successfully.#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.064 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.118 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.125 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.129 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.129 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.130 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.130 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.131 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.131 2 DEBUG nova.virt.libvirt.driver [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.151 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.152 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370943.0541863, ee07a001-cef5-44fc-907a-ce9ed6b68b02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.152 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] VM Paused (Lifecycle Event)#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.176 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.180 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370943.0606146, ee07a001-cef5-44fc-907a-ce9ed6b68b02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.180 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.184 2 INFO nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Took 4.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.185 2 DEBUG nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.196 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.200 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.240 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.279 2 INFO nova.compute.manager [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Took 5.21 seconds to build instance.#033[00m
Oct 13 11:55:43 np0005485008 nova_compute[192512]: 2025-10-13 15:55:43.298 2 DEBUG oslo_concurrency.lockutils [None req-f61f6e11-d8d9-4182-8f79-494cbc38672a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:44 np0005485008 nova_compute[192512]: 2025-10-13 15:55:44.547 2 DEBUG nova.compute.manager [req-93f67b3d-dd9e-4348-af0e-3f76463a8805 req-9ca63bbc-1c9a-484f-a772-5f8d24a43d26 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:55:44 np0005485008 nova_compute[192512]: 2025-10-13 15:55:44.548 2 DEBUG oslo_concurrency.lockutils [req-93f67b3d-dd9e-4348-af0e-3f76463a8805 req-9ca63bbc-1c9a-484f-a772-5f8d24a43d26 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:55:44 np0005485008 nova_compute[192512]: 2025-10-13 15:55:44.548 2 DEBUG oslo_concurrency.lockutils [req-93f67b3d-dd9e-4348-af0e-3f76463a8805 req-9ca63bbc-1c9a-484f-a772-5f8d24a43d26 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:55:44 np0005485008 nova_compute[192512]: 2025-10-13 15:55:44.548 2 DEBUG oslo_concurrency.lockutils [req-93f67b3d-dd9e-4348-af0e-3f76463a8805 req-9ca63bbc-1c9a-484f-a772-5f8d24a43d26 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:55:44 np0005485008 nova_compute[192512]: 2025-10-13 15:55:44.549 2 DEBUG nova.compute.manager [req-93f67b3d-dd9e-4348-af0e-3f76463a8805 req-9ca63bbc-1c9a-484f-a772-5f8d24a43d26 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] No waiting events found dispatching network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:55:44 np0005485008 nova_compute[192512]: 2025-10-13 15:55:44.549 2 WARNING nova.compute.manager [req-93f67b3d-dd9e-4348-af0e-3f76463a8805 req-9ca63bbc-1c9a-484f-a772-5f8d24a43d26 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received unexpected event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 for instance with vm_state active and task_state None.#033[00m
Oct 13 11:55:46 np0005485008 nova_compute[192512]: 2025-10-13 15:55:46.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:46 np0005485008 nova_compute[192512]: 2025-10-13 15:55:46.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:46 np0005485008 podman[219999]: 2025-10-13 15:55:46.779075072 +0000 UTC m=+0.066240841 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:55:46 np0005485008 podman[219998]: 2025-10-13 15:55:46.791696506 +0000 UTC m=+0.082013614 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 13 11:55:46 np0005485008 podman[220000]: 2025-10-13 15:55:46.79853022 +0000 UTC m=+0.075036896 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:55:46 np0005485008 podman[219997]: 2025-10-13 15:55:46.819290869 +0000 UTC m=+0.110340120 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:55:46 np0005485008 podman[220001]: 2025-10-13 15:55:46.829314482 +0000 UTC m=+0.108512052 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 11:55:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:55:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:55:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:55:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:55:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:55:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:55:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:55:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:55:51 np0005485008 nova_compute[192512]: 2025-10-13 15:55:51.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:51 np0005485008 nova_compute[192512]: 2025-10-13 15:55:51.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:55 np0005485008 ovn_controller[94758]: 2025-10-13T15:55:55Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:10:73 10.100.0.12
Oct 13 11:55:55 np0005485008 ovn_controller[94758]: 2025-10-13T15:55:55Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:10:73 10.100.0.12
Oct 13 11:55:56 np0005485008 nova_compute[192512]: 2025-10-13 15:55:56.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:55:56 np0005485008 nova_compute[192512]: 2025-10-13 15:55:56.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:00 np0005485008 podman[220110]: 2025-10-13 15:56:00.810708673 +0000 UTC m=+0.099540012 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, release=1755695350, version=9.6, config_id=edpm, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container)
Oct 13 11:56:01 np0005485008 nova_compute[192512]: 2025-10-13 15:56:01.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:01 np0005485008 nova_compute[192512]: 2025-10-13 15:56:01.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:05 np0005485008 podman[202884]: time="2025-10-13T15:56:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:56:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:56:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:56:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:56:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3461 "" "Go-http-client/1.1"
Oct 13 11:56:06 np0005485008 nova_compute[192512]: 2025-10-13 15:56:06.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:06 np0005485008 nova_compute[192512]: 2025-10-13 15:56:06.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:07 np0005485008 nova_compute[192512]: 2025-10-13 15:56:07.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:07 np0005485008 nova_compute[192512]: 2025-10-13 15:56:07.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:56:11 np0005485008 nova_compute[192512]: 2025-10-13 15:56:11.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:11 np0005485008 nova_compute[192512]: 2025-10-13 15:56:11.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:11 np0005485008 nova_compute[192512]: 2025-10-13 15:56:11.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:12 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:12Z|00153|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 13 11:56:12 np0005485008 nova_compute[192512]: 2025-10-13 15:56:12.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:13 np0005485008 nova_compute[192512]: 2025-10-13 15:56:13.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:14 np0005485008 nova_compute[192512]: 2025-10-13 15:56:14.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:15 np0005485008 nova_compute[192512]: 2025-10-13 15:56:15.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:16 np0005485008 nova_compute[192512]: 2025-10-13 15:56:16.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:16 np0005485008 nova_compute[192512]: 2025-10-13 15:56:16.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:16 np0005485008 nova_compute[192512]: 2025-10-13 15:56:16.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:17 np0005485008 podman[220135]: 2025-10-13 15:56:17.768097073 +0000 UTC m=+0.056208438 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:56:17 np0005485008 podman[220132]: 2025-10-13 15:56:17.768625869 +0000 UTC m=+0.069530985 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 13 11:56:17 np0005485008 podman[220134]: 2025-10-13 15:56:17.787688005 +0000 UTC m=+0.079929370 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 11:56:17 np0005485008 podman[220133]: 2025-10-13 15:56:17.79872598 +0000 UTC m=+0.090062316 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 11:56:17 np0005485008 podman[220141]: 2025-10-13 15:56:17.812588793 +0000 UTC m=+0.096426075 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 11:56:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:56:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:56:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:56:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:56:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:56:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:56:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:56:19 np0005485008 nova_compute[192512]: 2025-10-13 15:56:19.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:19 np0005485008 nova_compute[192512]: 2025-10-13 15:56:19.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:56:19 np0005485008 nova_compute[192512]: 2025-10-13 15:56:19.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:56:19 np0005485008 nova_compute[192512]: 2025-10-13 15:56:19.761 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:56:19 np0005485008 nova_compute[192512]: 2025-10-13 15:56:19.761 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:56:19 np0005485008 nova_compute[192512]: 2025-10-13 15:56:19.762 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:56:19 np0005485008 nova_compute[192512]: 2025-10-13 15:56:19.762 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ee07a001-cef5-44fc-907a-ce9ed6b68b02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:56:20 np0005485008 nova_compute[192512]: 2025-10-13 15:56:20.943 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Updating instance_info_cache with network_info: [{"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:56:20 np0005485008 nova_compute[192512]: 2025-10-13 15:56:20.962 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-ee07a001-cef5-44fc-907a-ce9ed6b68b02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:56:20 np0005485008 nova_compute[192512]: 2025-10-13 15:56:20.962 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:56:20 np0005485008 nova_compute[192512]: 2025-10-13 15:56:20.963 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.005 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.006 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.006 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.007 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.077 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.170 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.171 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.257 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.508 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.510 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5691MB free_disk=73.43648147583008GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.511 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.512 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.595 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance ee07a001-cef5-44fc-907a-ce9ed6b68b02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.596 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.596 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.730 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.748 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.774 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:56:21 np0005485008 nova_compute[192512]: 2025-10-13 15:56:21.775 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:26 np0005485008 nova_compute[192512]: 2025-10-13 15:56:26.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:26 np0005485008 nova_compute[192512]: 2025-10-13 15:56:26.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:26 np0005485008 nova_compute[192512]: 2025-10-13 15:56:26.471 2 DEBUG nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Creating tmpfile /var/lib/nova/instances/tmpk6yq6qu6 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 11:56:26 np0005485008 nova_compute[192512]: 2025-10-13 15:56:26.554 2 DEBUG nova.compute.manager [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpk6yq6qu6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 11:56:27 np0005485008 nova_compute[192512]: 2025-10-13 15:56:27.717 2 DEBUG nova.compute.manager [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpk6yq6qu6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a309eb32-dc11-4530-b347-a465889a0cbb',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 11:56:27 np0005485008 nova_compute[192512]: 2025-10-13 15:56:27.751 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-a309eb32-dc11-4530-b347-a465889a0cbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:56:27 np0005485008 nova_compute[192512]: 2025-10-13 15:56:27.752 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-a309eb32-dc11-4530-b347-a465889a0cbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:56:27 np0005485008 nova_compute[192512]: 2025-10-13 15:56:27.752 2 DEBUG nova.network.neutron [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.249 2 DEBUG nova.network.neutron [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Updating instance_info_cache with network_info: [{"id": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "address": "fa:16:3e:f4:dd:ea", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3699c1ee-83", "ovs_interfaceid": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.277 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-a309eb32-dc11-4530-b347-a465889a0cbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.279 2 DEBUG nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpk6yq6qu6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a309eb32-dc11-4530-b347-a465889a0cbb',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.279 2 DEBUG nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Creating instance directory: /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.280 2 DEBUG nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Creating disk.info with the contents: {'/var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk': 'qcow2', '/var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.280 2 DEBUG nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.281 2 DEBUG nova.objects.instance [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a309eb32-dc11-4530-b347-a465889a0cbb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.307 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.406 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.408 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.409 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.429 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.488 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.490 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.521 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk 1073741824" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.523 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.523 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.590 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.591 2 DEBUG nova.virt.disk.api [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.592 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.643 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.645 2 DEBUG nova.virt.disk.api [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.646 2 DEBUG nova.objects.instance [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid a309eb32-dc11-4530-b347-a465889a0cbb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.660 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.688 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk.config 485376" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.691 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk.config to /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 11:56:31 np0005485008 nova_compute[192512]: 2025-10-13 15:56:31.691 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk.config /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:56:31 np0005485008 podman[220257]: 2025-10-13 15:56:31.769443207 +0000 UTC m=+0.069349358 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.201 2 DEBUG oslo_concurrency.processutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb/disk.config /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.202 2 DEBUG nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.203 2 DEBUG nova.virt.libvirt.vif [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:55:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2121343508',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2121343508',id=13,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:55:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-8dkm9qfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:55:31Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=a309eb32-dc11-4530-b347-a465889a0cbb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "address": "fa:16:3e:f4:dd:ea", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3699c1ee-83", "ovs_interfaceid": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.204 2 DEBUG nova.network.os_vif_util [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "address": "fa:16:3e:f4:dd:ea", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3699c1ee-83", "ovs_interfaceid": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.205 2 DEBUG nova.network.os_vif_util [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3699c1ee-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.205 2 DEBUG os_vif [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3699c1ee-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.210 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3699c1ee-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3699c1ee-83, col_values=(('external_ids', {'iface-id': '3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:dd:ea', 'vm-uuid': 'a309eb32-dc11-4530-b347-a465889a0cbb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:32 np0005485008 NetworkManager[51587]: <info>  [1760370992.2138] manager: (tap3699c1ee-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.223 2 INFO os_vif [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3699c1ee-83')#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.223 2 DEBUG nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 11:56:32 np0005485008 nova_compute[192512]: 2025-10-13 15:56:32.224 2 DEBUG nova.compute.manager [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpk6yq6qu6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a309eb32-dc11-4530-b347-a465889a0cbb',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 11:56:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:33.315 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:56:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:33.317 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:56:33 np0005485008 nova_compute[192512]: 2025-10-13 15:56:33.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:33.959 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:33.960 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:33.960 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:33 np0005485008 nova_compute[192512]: 2025-10-13 15:56:33.995 2 DEBUG nova.network.neutron [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Port 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 11:56:33 np0005485008 nova_compute[192512]: 2025-10-13 15:56:33.998 2 DEBUG nova.compute.manager [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpk6yq6qu6',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='a309eb32-dc11-4530-b347-a465889a0cbb',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 11:56:34 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 11:56:34 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 11:56:34 np0005485008 kernel: tap3699c1ee-83: entered promiscuous mode
Oct 13 11:56:34 np0005485008 NetworkManager[51587]: <info>  [1760370994.2686] manager: (tap3699c1ee-83): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Oct 13 11:56:34 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:34Z|00154|binding|INFO|Claiming lport 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a for this additional chassis.
Oct 13 11:56:34 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:34Z|00155|binding|INFO|3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a: Claiming fa:16:3e:f4:dd:ea 10.100.0.4
Oct 13 11:56:34 np0005485008 nova_compute[192512]: 2025-10-13 15:56:34.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:34 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:34Z|00156|binding|INFO|Setting lport 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a ovn-installed in OVS
Oct 13 11:56:34 np0005485008 nova_compute[192512]: 2025-10-13 15:56:34.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:34 np0005485008 systemd-udevd[220317]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:56:34 np0005485008 systemd-machined[152551]: New machine qemu-13-instance-0000000d.
Oct 13 11:56:34 np0005485008 NetworkManager[51587]: <info>  [1760370994.3176] device (tap3699c1ee-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:56:34 np0005485008 NetworkManager[51587]: <info>  [1760370994.3184] device (tap3699c1ee-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:56:34 np0005485008 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Oct 13 11:56:35 np0005485008 podman[202884]: time="2025-10-13T15:56:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:56:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:56:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:56:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:56:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3464 "" "Go-http-client/1.1"
Oct 13 11:56:36 np0005485008 nova_compute[192512]: 2025-10-13 15:56:36.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.298 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370997.297839, a309eb32-dc11-4530-b347-a465889a0cbb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.298 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] VM Started (Lifecycle Event)#033[00m
Oct 13 11:56:37 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:37.321 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.345 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.918 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760370997.917634, a309eb32-dc11-4530-b347-a465889a0cbb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.918 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.994 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:56:37 np0005485008 nova_compute[192512]: 2025-10-13 15:56:37.998 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:56:38 np0005485008 nova_compute[192512]: 2025-10-13 15:56:38.027 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 11:56:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:39Z|00157|binding|INFO|Claiming lport 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a for this chassis.
Oct 13 11:56:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:39Z|00158|binding|INFO|3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a: Claiming fa:16:3e:f4:dd:ea 10.100.0.4
Oct 13 11:56:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:39Z|00159|binding|INFO|Setting lport 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a up in Southbound
Oct 13 11:56:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:39.886 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:dd:ea 10.100.0.4'], port_security=['fa:16:3e:f4:dd:ea 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a309eb32-dc11-4530-b347-a465889a0cbb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:56:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:39.888 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 11:56:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:39.889 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 11:56:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:39.914 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[edb70147-e552-49e3-b58f-907733ca93b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:39.958 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[29d3dc58-8937-4648-b036-b02a632cd55c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:39.963 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9d590a-ac91-4d0c-9190-55f0caab14b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:39.992 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[3cab328e-a3f2-4a15-ad49-617f2cdf26b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:40 np0005485008 nova_compute[192512]: 2025-10-13 15:56:40.010 2 INFO nova.compute.manager [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Post operation of migration started#033[00m
Oct 13 11:56:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:40.011 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2912e4-6ebc-4ee3-b775-ec4cfb76657a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447978, 'reachable_time': 28188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 220351, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:40.033 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaec1cb-aa94-408b-89a5-9391ce3f4011]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447990, 'tstamp': 447990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220352, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447993, 'tstamp': 447993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220352, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:40.036 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:40 np0005485008 nova_compute[192512]: 2025-10-13 15:56:40.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:40 np0005485008 nova_compute[192512]: 2025-10-13 15:56:40.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:40.039 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:40.040 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:40.041 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:40.041 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:41 np0005485008 nova_compute[192512]: 2025-10-13 15:56:41.087 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-a309eb32-dc11-4530-b347-a465889a0cbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:56:41 np0005485008 nova_compute[192512]: 2025-10-13 15:56:41.087 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-a309eb32-dc11-4530-b347-a465889a0cbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:56:41 np0005485008 nova_compute[192512]: 2025-10-13 15:56:41.088 2 DEBUG nova.network.neutron [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:56:41 np0005485008 nova_compute[192512]: 2025-10-13 15:56:41.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:42 np0005485008 nova_compute[192512]: 2025-10-13 15:56:42.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:43 np0005485008 nova_compute[192512]: 2025-10-13 15:56:43.846 2 DEBUG nova.network.neutron [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Updating instance_info_cache with network_info: [{"id": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "address": "fa:16:3e:f4:dd:ea", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3699c1ee-83", "ovs_interfaceid": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:56:43 np0005485008 nova_compute[192512]: 2025-10-13 15:56:43.894 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-a309eb32-dc11-4530-b347-a465889a0cbb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:56:43 np0005485008 nova_compute[192512]: 2025-10-13 15:56:43.948 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:43 np0005485008 nova_compute[192512]: 2025-10-13 15:56:43.948 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:43 np0005485008 nova_compute[192512]: 2025-10-13 15:56:43.949 2 DEBUG oslo_concurrency.lockutils [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:43 np0005485008 nova_compute[192512]: 2025-10-13 15:56:43.954 2 INFO nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 11:56:43 np0005485008 virtqemud[192082]: Domain id=13 name='instance-0000000d' uuid=a309eb32-dc11-4530-b347-a465889a0cbb is tainted: custom-monitor
Oct 13 11:56:44 np0005485008 nova_compute[192512]: 2025-10-13 15:56:44.961 2 INFO nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 11:56:45 np0005485008 nova_compute[192512]: 2025-10-13 15:56:45.967 2 INFO nova.virt.libvirt.driver [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 11:56:45 np0005485008 nova_compute[192512]: 2025-10-13 15:56:45.975 2 DEBUG nova.compute.manager [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:56:46 np0005485008 nova_compute[192512]: 2025-10-13 15:56:46.009 2 DEBUG nova.objects.instance [None req-a460d00b-3371-4fed-ad55-86b2d20f03f7 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 11:56:46 np0005485008 nova_compute[192512]: 2025-10-13 15:56:46.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:47 np0005485008 nova_compute[192512]: 2025-10-13 15:56:47.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:48 np0005485008 podman[220358]: 2025-10-13 15:56:48.776333063 +0000 UTC m=+0.068105320 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:56:48 np0005485008 podman[220356]: 2025-10-13 15:56:48.777972693 +0000 UTC m=+0.077148201 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 13 11:56:48 np0005485008 podman[220359]: 2025-10-13 15:56:48.780018317 +0000 UTC m=+0.067654415 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:56:48 np0005485008 podman[220357]: 2025-10-13 15:56:48.792657333 +0000 UTC m=+0.078528416 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 11:56:48 np0005485008 podman[220363]: 2025-10-13 15:56:48.816526038 +0000 UTC m=+0.103803695 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:56:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:56:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:56:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:56:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:56:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:56:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:56:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:56:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.446 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.447 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.447 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.448 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.448 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.449 2 INFO nova.compute.manager [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Terminating instance#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.451 2 DEBUG nova.compute.manager [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:56:50 np0005485008 kernel: tap08102733-8c (unregistering): left promiscuous mode
Oct 13 11:56:50 np0005485008 NetworkManager[51587]: <info>  [1760371010.4800] device (tap08102733-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00160|binding|INFO|Releasing lport 08102733-8c09-4638-ad9d-7300416d4e60 from this chassis (sb_readonly=0)
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00161|binding|INFO|Setting lport 08102733-8c09-4638-ad9d-7300416d4e60 down in Southbound
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00162|binding|INFO|Removing iface tap08102733-8c ovn-installed in OVS
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.545 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:10:73 10.100.0.12'], port_security=['fa:16:3e:97:10:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ee07a001-cef5-44fc-907a-ce9ed6b68b02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=08102733-8c09-4638-ad9d-7300416d4e60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.547 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 08102733-8c09-4638-ad9d-7300416d4e60 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.548 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.570 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[eff055f7-f1fb-4428-858c-985ea3f065e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct 13 11:56:50 np0005485008 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000e.scope: Consumed 15.109s CPU time.
Oct 13 11:56:50 np0005485008 systemd-machined[152551]: Machine qemu-12-instance-0000000e terminated.
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.610 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[0caeacfc-d330-483c-9b32-4ac650a45549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.614 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6fc14b-c623-4c8f-9f73-007e616e2da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.646 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[3a966678-ffd4-429c-b4da-bcbd3b6e031e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.664 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[67c0f57f-0b6a-43f9-9988-3f458df07f96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447978, 'reachable_time': 28188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 220472, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 kernel: tap08102733-8c: entered promiscuous mode
Oct 13 11:56:50 np0005485008 kernel: tap08102733-8c (unregistering): left promiscuous mode
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00163|binding|INFO|Claiming lport 08102733-8c09-4638-ad9d-7300416d4e60 for this chassis.
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00164|binding|INFO|08102733-8c09-4638-ad9d-7300416d4e60: Claiming fa:16:3e:97:10:73 10.100.0.12
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.689 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebf4da9-1410-4603-aae4-00aa0279401c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447990, 'tstamp': 447990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220475, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447993, 'tstamp': 447993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220475, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.691 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.694 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:10:73 10.100.0.12'], port_security=['fa:16:3e:97:10:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ee07a001-cef5-44fc-907a-ce9ed6b68b02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=08102733-8c09-4638-ad9d-7300416d4e60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00165|binding|INFO|Setting lport 08102733-8c09-4638-ad9d-7300416d4e60 ovn-installed in OVS
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00166|binding|INFO|Setting lport 08102733-8c09-4638-ad9d-7300416d4e60 up in Southbound
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00167|binding|INFO|Releasing lport 08102733-8c09-4638-ad9d-7300416d4e60 from this chassis (sb_readonly=1)
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00168|if_status|INFO|Dropped 8 log messages in last 691 seconds (most recently, 684 seconds ago) due to excessive rate
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00169|if_status|INFO|Not setting lport 08102733-8c09-4638-ad9d-7300416d4e60 down as sb is readonly
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00170|binding|INFO|Removing iface tap08102733-8c ovn-installed in OVS
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00171|binding|INFO|Releasing lport 08102733-8c09-4638-ad9d-7300416d4e60 from this chassis (sb_readonly=0)
Oct 13 11:56:50 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:50Z|00172|binding|INFO|Setting lport 08102733-8c09-4638-ad9d-7300416d4e60 down in Southbound
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.722 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:10:73 10.100.0.12'], port_security=['fa:16:3e:97:10:73 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ee07a001-cef5-44fc-907a-ce9ed6b68b02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=08102733-8c09-4638-ad9d-7300416d4e60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.724 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.724 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.725 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.725 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.727 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 08102733-8c09-4638-ad9d-7300416d4e60 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.728 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.734 2 INFO nova.virt.libvirt.driver [-] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Instance destroyed successfully.#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.734 2 DEBUG nova.objects.instance [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid ee07a001-cef5-44fc-907a-ce9ed6b68b02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.746 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[85b33948-7f4e-48d2-aa42-9a427a1cbc9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.751 2 DEBUG nova.virt.libvirt.vif [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:55:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1336596117',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1336596117',id=14,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:55:43Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-mmwu6mm5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:55:43Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=ee07a001-cef5-44fc-907a-ce9ed6b68b02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.751 2 DEBUG nova.network.os_vif_util [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "08102733-8c09-4638-ad9d-7300416d4e60", "address": "fa:16:3e:97:10:73", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap08102733-8c", "ovs_interfaceid": "08102733-8c09-4638-ad9d-7300416d4e60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.752 2 DEBUG nova.network.os_vif_util [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:10:73,bridge_name='br-int',has_traffic_filtering=True,id=08102733-8c09-4638-ad9d-7300416d4e60,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08102733-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.752 2 DEBUG os_vif [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:10:73,bridge_name='br-int',has_traffic_filtering=True,id=08102733-8c09-4638-ad9d-7300416d4e60,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08102733-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.754 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08102733-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.759 2 INFO os_vif [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:10:73,bridge_name='br-int',has_traffic_filtering=True,id=08102733-8c09-4638-ad9d-7300416d4e60,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap08102733-8c')#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.759 2 INFO nova.virt.libvirt.driver [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Deleting instance files /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02_del#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.760 2 INFO nova.virt.libvirt.driver [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Deletion of /var/lib/nova/instances/ee07a001-cef5-44fc-907a-ce9ed6b68b02_del complete#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.781 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[422b2a65-f4d7-442a-afb7-66b9eafef5e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.785 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c618289e-e7c7-4ebe-8186-f257e585adcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.819 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[e804cc6d-e54d-4a7c-8baf-95cd9e04b4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.837 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[82b78e4d-16fd-41d8-86f0-68d1a89d7b4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 9, 'rx_bytes': 1756, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 9, 'rx_bytes': 1756, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447978, 'reachable_time': 28188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 220496, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.849 2 INFO nova.compute.manager [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.850 2 DEBUG oslo.service.loopingcall [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.850 2 DEBUG nova.compute.manager [-] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.850 2 DEBUG nova.network.neutron [-] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.860 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fd5d35-efc9-47c1-b5a7-ea738d5d33a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447990, 'tstamp': 447990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220497, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447993, 'tstamp': 447993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220497, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.862 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.866 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.866 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.866 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.866 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.867 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 08102733-8c09-4638-ad9d-7300416d4e60 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.869 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.885 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[825c7eca-dc72-491b-b9c1-d53e0b573632]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.919 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb10479-9d41-4555-b351-5627b9edad3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.922 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b63fa8ba-36ee-45fd-b2ea-aa28baf141d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.951 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7613cb-035b-4323-98be-8c4286873c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.973 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[39517a2b-16ac-4ae7-b2eb-6042ac360d95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447978, 'reachable_time': 28188, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 220503, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.987 2 DEBUG nova.compute.manager [req-01c7410b-e61c-4c76-99e3-fcab4011aae6 req-96f74a5c-7007-4183-b7d0-74fdd1db9ed9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-vif-unplugged-08102733-8c09-4638-ad9d-7300416d4e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.988 2 DEBUG oslo_concurrency.lockutils [req-01c7410b-e61c-4c76-99e3-fcab4011aae6 req-96f74a5c-7007-4183-b7d0-74fdd1db9ed9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.988 2 DEBUG oslo_concurrency.lockutils [req-01c7410b-e61c-4c76-99e3-fcab4011aae6 req-96f74a5c-7007-4183-b7d0-74fdd1db9ed9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.988 2 DEBUG oslo_concurrency.lockutils [req-01c7410b-e61c-4c76-99e3-fcab4011aae6 req-96f74a5c-7007-4183-b7d0-74fdd1db9ed9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.989 2 DEBUG nova.compute.manager [req-01c7410b-e61c-4c76-99e3-fcab4011aae6 req-96f74a5c-7007-4183-b7d0-74fdd1db9ed9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] No waiting events found dispatching network-vif-unplugged-08102733-8c09-4638-ad9d-7300416d4e60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:56:50 np0005485008 nova_compute[192512]: 2025-10-13 15:56:50.989 2 DEBUG nova.compute.manager [req-01c7410b-e61c-4c76-99e3-fcab4011aae6 req-96f74a5c-7007-4183-b7d0-74fdd1db9ed9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-vif-unplugged-08102733-8c09-4638-ad9d-7300416d4e60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.997 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ed166c14-c43f-495a-a225-7c36f9cda3a6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447990, 'tstamp': 447990}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220504, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447993, 'tstamp': 447993}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 220504, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:50 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:50.999 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:51.003 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:51.004 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:51.004 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:51.004 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.353 2 DEBUG nova.network.neutron [-] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.369 2 INFO nova.compute.manager [-] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Took 0.52 seconds to deallocate network for instance.#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.415 2 DEBUG nova.compute.manager [req-fefe1f71-4f6a-49d0-82d3-2ef6878db1ab req-ec7a5fd9-563f-42d4-a7bb-4e9394b8d5af 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-vif-deleted-08102733-8c09-4638-ad9d-7300416d4e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.423 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.424 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.497 2 DEBUG nova.compute.provider_tree [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.515 2 DEBUG nova.scheduler.client.report [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.541 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.569 2 INFO nova.scheduler.client.report [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance ee07a001-cef5-44fc-907a-ce9ed6b68b02#033[00m
Oct 13 11:56:51 np0005485008 nova_compute[192512]: 2025-10-13 15:56:51.647 2 DEBUG oslo_concurrency.lockutils [None req-bc00fd23-8d90-435f-9359-e045229dd332 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.307 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "a309eb32-dc11-4530-b347-a465889a0cbb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.308 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.308 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.308 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.309 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.310 2 INFO nova.compute.manager [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Terminating instance#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.311 2 DEBUG nova.compute.manager [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:56:52 np0005485008 kernel: tap3699c1ee-83 (unregistering): left promiscuous mode
Oct 13 11:56:52 np0005485008 NetworkManager[51587]: <info>  [1760371012.3327] device (tap3699c1ee-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:56:52 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:52Z|00173|binding|INFO|Releasing lport 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a from this chassis (sb_readonly=0)
Oct 13 11:56:52 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:52Z|00174|binding|INFO|Setting lport 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a down in Southbound
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:52 np0005485008 ovn_controller[94758]: 2025-10-13T15:56:52Z|00175|binding|INFO|Removing iface tap3699c1ee-83 ovn-installed in OVS
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.354 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:dd:ea 10.100.0.4'], port_security=['fa:16:3e:f4:dd:ea 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a309eb32-dc11-4530-b347-a465889a0cbb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.356 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.357 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.358 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[87397f8e-8833-48fd-b006-6af15aaa2c54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.359 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace which is not needed anymore#033[00m
Oct 13 11:56:52 np0005485008 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 13 11:56:52 np0005485008 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 3.986s CPU time.
Oct 13 11:56:52 np0005485008 systemd-machined[152551]: Machine qemu-13-instance-0000000d terminated.
Oct 13 11:56:52 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[219982]: [NOTICE]   (219986) : haproxy version is 2.8.14-c23fe91
Oct 13 11:56:52 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[219982]: [NOTICE]   (219986) : path to executable is /usr/sbin/haproxy
Oct 13 11:56:52 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[219982]: [WARNING]  (219986) : Exiting Master process...
Oct 13 11:56:52 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[219982]: [ALERT]    (219986) : Current worker (219988) exited with code 143 (Terminated)
Oct 13 11:56:52 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[219982]: [WARNING]  (219986) : All workers exited. Exiting... (0)
Oct 13 11:56:52 np0005485008 systemd[1]: libpod-9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e.scope: Deactivated successfully.
Oct 13 11:56:52 np0005485008 podman[220527]: 2025-10-13 15:56:52.511219551 +0000 UTC m=+0.049104296 container died 9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 11:56:52 np0005485008 NetworkManager[51587]: <info>  [1760371012.5321] manager: (tap3699c1ee-83): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Oct 13 11:56:52 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e-userdata-shm.mount: Deactivated successfully.
Oct 13 11:56:52 np0005485008 systemd[1]: var-lib-containers-storage-overlay-7d05ee41532de44f8aa382ca171b8840b0db8ed161ad192f72626aa10c200dea-merged.mount: Deactivated successfully.
Oct 13 11:56:52 np0005485008 podman[220527]: 2025-10-13 15:56:52.567103818 +0000 UTC m=+0.104988573 container cleanup 9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.580 2 INFO nova.virt.libvirt.driver [-] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Instance destroyed successfully.#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.581 2 DEBUG nova.objects.instance [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid a309eb32-dc11-4530-b347-a465889a0cbb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:56:52 np0005485008 systemd[1]: libpod-conmon-9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e.scope: Deactivated successfully.
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.598 2 DEBUG nova.virt.libvirt.vif [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T15:55:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2121343508',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2121343508',id=13,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:55:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-8dkm9qfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:56:46Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=a309eb32-dc11-4530-b347-a465889a0cbb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "address": "fa:16:3e:f4:dd:ea", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3699c1ee-83", "ovs_interfaceid": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.599 2 DEBUG nova.network.os_vif_util [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "address": "fa:16:3e:f4:dd:ea", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3699c1ee-83", "ovs_interfaceid": "3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.599 2 DEBUG nova.network.os_vif_util [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3699c1ee-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.600 2 DEBUG os_vif [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3699c1ee-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3699c1ee-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.609 2 INFO os_vif [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3699c1ee-83')#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.610 2 INFO nova.virt.libvirt.driver [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Deleting instance files /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb_del#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.610 2 INFO nova.virt.libvirt.driver [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Deletion of /var/lib/nova/instances/a309eb32-dc11-4530-b347-a465889a0cbb_del complete#033[00m
Oct 13 11:56:52 np0005485008 podman[220573]: 2025-10-13 15:56:52.64972568 +0000 UTC m=+0.055759274 container remove 9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.656 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8adefb80-d322-45c2-b8a6-55a1eca5a4ce]: (4, ('Mon Oct 13 03:56:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e)\n9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e\nMon Oct 13 03:56:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e)\n9922f9850f282314c9ee7a2fd6c9956bec39c87225624ac0752bd61b6b82192e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.658 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab83794-b2f6-4ce2-bc78-756b409437ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.659 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:52 np0005485008 kernel: tap39a43da9-c0: left promiscuous mode
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.677 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a87f27f3-f5b6-4d5f-bf68-4441b7353d9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.696 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7386f4a0-44e5-4c76-a0fe-1a715a2d548d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.698 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2681530a-096d-48a0-8cef-fcfe76333d96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.713 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa49a99-129c-4016-948c-790097370129]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447970, 'reachable_time': 17307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 220588, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.716 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 11:56:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:56:52.717 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fbd9ca-283a-4b7d-9924-06f8b75fb788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:56:52 np0005485008 systemd[1]: run-netns-ovnmeta\x2d39a43da9\x2dcf4c\x2d4fe3\x2dab73\x2dbf8705320dae.mount: Deactivated successfully.
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.741 2 INFO nova.compute.manager [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.742 2 DEBUG oslo.service.loopingcall [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.742 2 DEBUG nova.compute.manager [-] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:56:52 np0005485008 nova_compute[192512]: 2025-10-13 15:56:52.743 2 DEBUG nova.network.neutron [-] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.113 2 DEBUG nova.compute.manager [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.114 2 DEBUG oslo_concurrency.lockutils [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.114 2 DEBUG oslo_concurrency.lockutils [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.115 2 DEBUG oslo_concurrency.lockutils [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.115 2 DEBUG nova.compute.manager [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] No waiting events found dispatching network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.115 2 WARNING nova.compute.manager [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received unexpected event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.115 2 DEBUG nova.compute.manager [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.116 2 DEBUG oslo_concurrency.lockutils [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.116 2 DEBUG oslo_concurrency.lockutils [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.116 2 DEBUG oslo_concurrency.lockutils [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "ee07a001-cef5-44fc-907a-ce9ed6b68b02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.116 2 DEBUG nova.compute.manager [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] No waiting events found dispatching network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.116 2 WARNING nova.compute.manager [req-f3a56e7c-69b3-4530-9767-316e07c7ebd1 req-b2ee6a3a-50f2-454c-b19a-85d3ed189203 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Received unexpected event network-vif-plugged-08102733-8c09-4638-ad9d-7300416d4e60 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.286 2 DEBUG nova.network.neutron [-] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.305 2 INFO nova.compute.manager [-] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Took 0.56 seconds to deallocate network for instance.#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.370 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.371 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.377 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.407 2 INFO nova.scheduler.client.report [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance a309eb32-dc11-4530-b347-a465889a0cbb#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.494 2 DEBUG oslo_concurrency.lockutils [None req-0b45ebb9-ae14-49d7-a4e3-b6a932a80ae6 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.523 2 DEBUG nova.compute.manager [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Received event network-vif-unplugged-3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.524 2 DEBUG oslo_concurrency.lockutils [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.524 2 DEBUG oslo_concurrency.lockutils [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.525 2 DEBUG oslo_concurrency.lockutils [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.525 2 DEBUG nova.compute.manager [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] No waiting events found dispatching network-vif-unplugged-3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.525 2 WARNING nova.compute.manager [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Received unexpected event network-vif-unplugged-3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.526 2 DEBUG nova.compute.manager [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Received event network-vif-plugged-3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.526 2 DEBUG oslo_concurrency.lockutils [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.527 2 DEBUG oslo_concurrency.lockutils [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.527 2 DEBUG oslo_concurrency.lockutils [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "a309eb32-dc11-4530-b347-a465889a0cbb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.527 2 DEBUG nova.compute.manager [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] No waiting events found dispatching network-vif-plugged-3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.528 2 WARNING nova.compute.manager [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Received unexpected event network-vif-plugged-3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:56:53 np0005485008 nova_compute[192512]: 2025-10-13 15:56:53.528 2 DEBUG nova.compute.manager [req-dcf6fb44-0167-4ca2-a559-69b141fd7dd8 req-aa78ad54-9ddf-4436-8f36-dc8a36262485 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Received event network-vif-deleted-3699c1ee-83c7-4d5d-bbdc-d29e6579ea5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:56:56 np0005485008 nova_compute[192512]: 2025-10-13 15:56:56.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:56:57 np0005485008 nova_compute[192512]: 2025-10-13 15:56:57.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:01 np0005485008 nova_compute[192512]: 2025-10-13 15:57:01.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:02 np0005485008 nova_compute[192512]: 2025-10-13 15:57:02.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:02 np0005485008 podman[220591]: 2025-10-13 15:57:02.776623243 +0000 UTC m=+0.070049360 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 11:57:05 np0005485008 podman[202884]: time="2025-10-13T15:57:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:57:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:57:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:57:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:57:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2996 "" "Go-http-client/1.1"
Oct 13 11:57:05 np0005485008 nova_compute[192512]: 2025-10-13 15:57:05.733 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371010.732206, ee07a001-cef5-44fc-907a-ce9ed6b68b02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:57:05 np0005485008 nova_compute[192512]: 2025-10-13 15:57:05.734 2 INFO nova.compute.manager [-] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:57:06 np0005485008 nova_compute[192512]: 2025-10-13 15:57:06.259 2 DEBUG nova.compute.manager [None req-51980619-ec4f-4028-83dd-9d4108d0d5d5 - - - - - -] [instance: ee07a001-cef5-44fc-907a-ce9ed6b68b02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:57:06 np0005485008 nova_compute[192512]: 2025-10-13 15:57:06.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:07 np0005485008 nova_compute[192512]: 2025-10-13 15:57:07.578 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371012.5761929, a309eb32-dc11-4530-b347-a465889a0cbb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:57:07 np0005485008 nova_compute[192512]: 2025-10-13 15:57:07.579 2 INFO nova.compute.manager [-] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:57:07 np0005485008 nova_compute[192512]: 2025-10-13 15:57:07.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:07 np0005485008 nova_compute[192512]: 2025-10-13 15:57:07.624 2 DEBUG nova.compute.manager [None req-9e269d0c-f1ae-4cbf-b405-1c8a238e609d - - - - - -] [instance: a309eb32-dc11-4530-b347-a465889a0cbb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:57:08 np0005485008 nova_compute[192512]: 2025-10-13 15:57:08.240 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:08 np0005485008 nova_compute[192512]: 2025-10-13 15:57:08.240 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:57:11 np0005485008 nova_compute[192512]: 2025-10-13 15:57:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:12 np0005485008 nova_compute[192512]: 2025-10-13 15:57:12.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:13 np0005485008 nova_compute[192512]: 2025-10-13 15:57:13.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:14 np0005485008 nova_compute[192512]: 2025-10-13 15:57:14.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:14 np0005485008 nova_compute[192512]: 2025-10-13 15:57:14.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:15 np0005485008 nova_compute[192512]: 2025-10-13 15:57:15.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:16 np0005485008 nova_compute[192512]: 2025-10-13 15:57:16.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:17 np0005485008 nova_compute[192512]: 2025-10-13 15:57:17.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:17 np0005485008 nova_compute[192512]: 2025-10-13 15:57:17.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:18 np0005485008 nova_compute[192512]: 2025-10-13 15:57:18.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:57:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:57:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:57:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:57:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:57:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:57:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:57:19 np0005485008 podman[220614]: 2025-10-13 15:57:19.772035178 +0000 UTC m=+0.067222682 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 11:57:19 np0005485008 podman[220613]: 2025-10-13 15:57:19.775752955 +0000 UTC m=+0.070419713 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 11:57:19 np0005485008 podman[220616]: 2025-10-13 15:57:19.78649295 +0000 UTC m=+0.066520280 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 11:57:19 np0005485008 podman[220615]: 2025-10-13 15:57:19.79385 +0000 UTC m=+0.081964742 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:57:19 np0005485008 podman[220627]: 2025-10-13 15:57:19.842276164 +0000 UTC m=+0.116357238 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.464 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.465 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.465 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.543 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.544 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.682 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.682 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.682 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.683 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.904 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.905 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5865MB free_disk=73.46583938598633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.906 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:57:20 np0005485008 nova_compute[192512]: 2025-10-13 15:57:20.906 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:57:21 np0005485008 nova_compute[192512]: 2025-10-13 15:57:21.195 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:57:21 np0005485008 nova_compute[192512]: 2025-10-13 15:57:21.196 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:57:21 np0005485008 nova_compute[192512]: 2025-10-13 15:57:21.218 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:57:21 np0005485008 nova_compute[192512]: 2025-10-13 15:57:21.318 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:57:21 np0005485008 nova_compute[192512]: 2025-10-13 15:57:21.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:21 np0005485008 nova_compute[192512]: 2025-10-13 15:57:21.667 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:57:21 np0005485008 nova_compute[192512]: 2025-10-13 15:57:21.668 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:57:22 np0005485008 nova_compute[192512]: 2025-10-13 15:57:22.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:23 np0005485008 ovn_controller[94758]: 2025-10-13T15:57:23Z|00176|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 13 11:57:26 np0005485008 nova_compute[192512]: 2025-10-13 15:57:26.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:27 np0005485008 nova_compute[192512]: 2025-10-13 15:57:27.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:31 np0005485008 nova_compute[192512]: 2025-10-13 15:57:31.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:32 np0005485008 nova_compute[192512]: 2025-10-13 15:57:32.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:33 np0005485008 podman[220713]: 2025-10-13 15:57:33.756026939 +0000 UTC m=+0.063175185 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides 
the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=)
Oct 13 11:57:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:57:33.960 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:57:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:57:33.961 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:57:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:57:33.961 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:57:35 np0005485008 podman[202884]: time="2025-10-13T15:57:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:57:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:57:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:57:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:57:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3002 "" "Go-http-client/1.1"
Oct 13 11:57:36 np0005485008 nova_compute[192512]: 2025-10-13 15:57:36.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:37 np0005485008 nova_compute[192512]: 2025-10-13 15:57:37.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:41 np0005485008 nova_compute[192512]: 2025-10-13 15:57:41.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:42 np0005485008 nova_compute[192512]: 2025-10-13 15:57:42.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:46 np0005485008 nova_compute[192512]: 2025-10-13 15:57:46.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:47 np0005485008 nova_compute[192512]: 2025-10-13 15:57:47.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:57:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:57:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:57:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:57:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:57:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:57:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:57:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:57:50 np0005485008 podman[220734]: 2025-10-13 15:57:50.764415291 +0000 UTC m=+0.067139709 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 13 11:57:50 np0005485008 podman[220737]: 2025-10-13 15:57:50.770976186 +0000 UTC m=+0.062194094 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:57:50 np0005485008 podman[220735]: 2025-10-13 15:57:50.7944579 +0000 UTC m=+0.092731399 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 11:57:50 np0005485008 podman[220736]: 2025-10-13 15:57:50.794870663 +0000 UTC m=+0.089034293 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, 
tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 11:57:50 np0005485008 podman[220743]: 2025-10-13 15:57:50.806528938 +0000 UTC m=+0.094236497 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:57:51 np0005485008 nova_compute[192512]: 2025-10-13 15:57:51.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:52 np0005485008 nova_compute[192512]: 2025-10-13 15:57:52.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:56 np0005485008 nova_compute[192512]: 2025-10-13 15:57:56.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:57:57 np0005485008 nova_compute[192512]: 2025-10-13 15:57:57.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:01 np0005485008 nova_compute[192512]: 2025-10-13 15:58:01.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:02 np0005485008 nova_compute[192512]: 2025-10-13 15:58:02.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:04 np0005485008 podman[220834]: 2025-10-13 15:58:04.755742993 +0000 UTC m=+0.062723772 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Oct 13 11:58:05 np0005485008 podman[202884]: time="2025-10-13T15:58:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:58:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:58:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:58:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:58:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3002 "" "Go-http-client/1.1"
Oct 13 11:58:06 np0005485008 nova_compute[192512]: 2025-10-13 15:58:06.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:07 np0005485008 nova_compute[192512]: 2025-10-13 15:58:07.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:08 np0005485008 nova_compute[192512]: 2025-10-13 15:58:08.552 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:08 np0005485008 nova_compute[192512]: 2025-10-13 15:58:08.553 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:58:11 np0005485008 nova_compute[192512]: 2025-10-13 15:58:11.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:12 np0005485008 nova_compute[192512]: 2025-10-13 15:58:12.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:15 np0005485008 nova_compute[192512]: 2025-10-13 15:58:15.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:15 np0005485008 nova_compute[192512]: 2025-10-13 15:58:15.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:16 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:16.316 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:58:16 np0005485008 nova_compute[192512]: 2025-10-13 15:58:16.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:16 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:16.317 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:58:16 np0005485008 nova_compute[192512]: 2025-10-13 15:58:16.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:16 np0005485008 nova_compute[192512]: 2025-10-13 15:58:16.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:17 np0005485008 nova_compute[192512]: 2025-10-13 15:58:17.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:17 np0005485008 nova_compute[192512]: 2025-10-13 15:58:17.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:18 np0005485008 nova_compute[192512]: 2025-10-13 15:58:18.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:58:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:58:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:58:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:58:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:58:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:58:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:58:19 np0005485008 nova_compute[192512]: 2025-10-13 15:58:19.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:20.320 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:58:21 np0005485008 nova_compute[192512]: 2025-10-13 15:58:21.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:21 np0005485008 nova_compute[192512]: 2025-10-13 15:58:21.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:21 np0005485008 nova_compute[192512]: 2025-10-13 15:58:21.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:58:21 np0005485008 nova_compute[192512]: 2025-10-13 15:58:21.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:58:21 np0005485008 nova_compute[192512]: 2025-10-13 15:58:21.454 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 11:58:21 np0005485008 podman[220858]: 2025-10-13 15:58:21.775316885 +0000 UTC m=+0.066306484 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, 
org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 11:58:21 np0005485008 podman[220857]: 2025-10-13 15:58:21.775379037 +0000 UTC m=+0.071667321 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 13 11:58:21 np0005485008 podman[220856]: 2025-10-13 15:58:21.781759716 +0000 UTC m=+0.083036626 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 11:58:21 np0005485008 podman[220864]: 2025-10-13 15:58:21.786414392 +0000 UTC m=+0.069762481 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:58:21 np0005485008 podman[220870]: 2025-10-13 15:58:21.829282191 +0000 UTC m=+0.099376156 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.573 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.574 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.574 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.574 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.750 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.751 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5856MB free_disk=73.46602630615234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.751 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.752 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.827 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.828 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.845 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.874 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.875 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.891 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.918 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.954 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.966 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.967 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:58:22 np0005485008 nova_compute[192512]: 2025-10-13 15:58:22.967 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:26 np0005485008 nova_compute[192512]: 2025-10-13 15:58:26.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:27 np0005485008 nova_compute[192512]: 2025-10-13 15:58:27.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:31 np0005485008 nova_compute[192512]: 2025-10-13 15:58:31.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.471 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.471 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.501 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.581 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.582 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.589 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.590 2 INFO nova.compute.claims [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.725 2 DEBUG nova.compute.provider_tree [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.739 2 DEBUG nova.scheduler.client.report [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.780 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.781 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.861 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.862 2 DEBUG nova.network.neutron [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.880 2 INFO nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 11:58:32 np0005485008 nova_compute[192512]: 2025-10-13 15:58:32.898 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.000 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.002 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.003 2 INFO nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Creating image(s)#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.003 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "/var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.004 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.004 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.016 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.085 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.086 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.087 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.100 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.164 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.165 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.200 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk 1073741824" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.201 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.202 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.258 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.259 2 DEBUG nova.virt.disk.api [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Checking if we can resize image /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.260 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.323 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.324 2 DEBUG nova.virt.disk.api [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Cannot resize image /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.324 2 DEBUG nova.objects.instance [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'migration_context' on Instance uuid 1099298b-0725-435d-ae44-ced74a5c30ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.343 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.343 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Ensure instance console log exists: /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.344 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.344 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:33 np0005485008 nova_compute[192512]: 2025-10-13 15:58:33.344 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:33.961 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:33.962 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:33.962 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:35 np0005485008 podman[202884]: time="2025-10-13T15:58:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:58:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:58:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 11:58:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:58:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2996 "" "Go-http-client/1.1"
Oct 13 11:58:35 np0005485008 podman[220977]: 2025-10-13 15:58:35.746676542 +0000 UTC m=+0.053905457 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7)
Oct 13 11:58:35 np0005485008 nova_compute[192512]: 2025-10-13 15:58:35.888 2 DEBUG nova.network.neutron [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Successfully created port: 06e48782-902b-456b-b7f1-ce2d72d27357 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 11:58:36 np0005485008 nova_compute[192512]: 2025-10-13 15:58:36.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.083 2 DEBUG nova.network.neutron [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Successfully updated port: 06e48782-902b-456b-b7f1-ce2d72d27357 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.098 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.099 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquired lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.099 2 DEBUG nova.network.neutron [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.322 2 DEBUG nova.compute.manager [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received event network-changed-06e48782-902b-456b-b7f1-ce2d72d27357 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.323 2 DEBUG nova.compute.manager [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Refreshing instance network info cache due to event network-changed-06e48782-902b-456b-b7f1-ce2d72d27357. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.323 2 DEBUG oslo_concurrency.lockutils [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.324 2 DEBUG nova.network.neutron [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 11:58:37 np0005485008 nova_compute[192512]: 2025-10-13 15:58:37.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:38 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.521 2 DEBUG nova.network.neutron [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Updating instance_info_cache with network_info: [{"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.539 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Releasing lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.540 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Instance network_info: |[{"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.540 2 DEBUG oslo_concurrency.lockutils [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.541 2 DEBUG nova.network.neutron [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Refreshing network info cache for port 06e48782-902b-456b-b7f1-ce2d72d27357 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.544 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Start _get_guest_xml network_info=[{"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.548 2 WARNING nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.553 2 DEBUG nova.virt.libvirt.host [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.554 2 DEBUG nova.virt.libvirt.host [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.557 2 DEBUG nova.virt.libvirt.host [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.557 2 DEBUG nova.virt.libvirt.host [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.558 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.558 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.559 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.559 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.559 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.560 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.560 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.560 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.560 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.561 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.561 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.561 2 DEBUG nova.virt.hardware [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.565 2 DEBUG nova.virt.libvirt.vif [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1334267621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1334267621',id=16,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-ssfj49th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:58:32Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=1099298b-0725-435d-ae44-ced74a5c30ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.565 2 DEBUG nova.network.os_vif_util [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.565 2 DEBUG nova.network.os_vif_util [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:bc:1d,bridge_name='br-int',has_traffic_filtering=True,id=06e48782-902b-456b-b7f1-ce2d72d27357,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e48782-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.566 2 DEBUG nova.objects.instance [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1099298b-0725-435d-ae44-ced74a5c30ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.588 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] End _get_guest_xml xml=<domain type="kvm">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <uuid>1099298b-0725-435d-ae44-ced74a5c30ef</uuid>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <name>instance-00000010</name>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteStrategies-server-1334267621</nova:name>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 15:58:38</nova:creationTime>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:user uuid="3f85e781b03b405795a2079908bd2792">tempest-TestExecuteStrategies-1416319229-project-admin</nova:user>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:project uuid="4d9418fd42c841d38cbfc7819a3fca65">tempest-TestExecuteStrategies-1416319229</nova:project>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        <nova:port uuid="06e48782-902b-456b-b7f1-ce2d72d27357">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <system>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <entry name="serial">1099298b-0725-435d-ae44-ced74a5c30ef</entry>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <entry name="uuid">1099298b-0725-435d-ae44-ced74a5c30ef</entry>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </system>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <os>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  </os>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <features>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  </features>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  </clock>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  <devices>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk.config"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </disk>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:11:bc:1d"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <target dev="tap06e48782-90"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </interface>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/console.log" append="off"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </serial>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <video>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </video>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </rng>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 11:58:38 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 11:58:38 np0005485008 nova_compute[192512]:  </devices>
Oct 13 11:58:38 np0005485008 nova_compute[192512]: </domain>
Oct 13 11:58:38 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.590 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Preparing to wait for external event network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.590 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.591 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.591 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.592 2 DEBUG nova.virt.libvirt.vif [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T15:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1334267621',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1334267621',id=16,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-ssfj49th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:58:32Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=1099298b-0725-435d-ae44-ced74a5c30ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.592 2 DEBUG nova.network.os_vif_util [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.593 2 DEBUG nova.network.os_vif_util [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:bc:1d,bridge_name='br-int',has_traffic_filtering=True,id=06e48782-902b-456b-b7f1-ce2d72d27357,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e48782-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.593 2 DEBUG os_vif [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:bc:1d,bridge_name='br-int',has_traffic_filtering=True,id=06e48782-902b-456b-b7f1-ce2d72d27357,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e48782-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.598 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06e48782-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.599 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06e48782-90, col_values=(('external_ids', {'iface-id': '06e48782-902b-456b-b7f1-ce2d72d27357', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:bc:1d', 'vm-uuid': '1099298b-0725-435d-ae44-ced74a5c30ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:38 np0005485008 NetworkManager[51587]: <info>  [1760371118.6020] manager: (tap06e48782-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.611 2 INFO os_vif [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:bc:1d,bridge_name='br-int',has_traffic_filtering=True,id=06e48782-902b-456b-b7f1-ce2d72d27357,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e48782-90')#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.671 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.671 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.672 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No VIF found with MAC fa:16:3e:11:bc:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 11:58:38 np0005485008 nova_compute[192512]: 2025-10-13 15:58:38.672 2 INFO nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Using config drive#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.033 2 INFO nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Creating config drive at /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk.config#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.039 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcp7qrep execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.170 2 DEBUG oslo_concurrency.processutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcp7qrep" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:58:39 np0005485008 kernel: tap06e48782-90: entered promiscuous mode
Oct 13 11:58:39 np0005485008 NetworkManager[51587]: <info>  [1760371119.2318] manager: (tap06e48782-90): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Oct 13 11:58:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:58:39Z|00177|binding|INFO|Claiming lport 06e48782-902b-456b-b7f1-ce2d72d27357 for this chassis.
Oct 13 11:58:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:58:39Z|00178|binding|INFO|06e48782-902b-456b-b7f1-ce2d72d27357: Claiming fa:16:3e:11:bc:1d 10.100.0.14
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:58:39Z|00179|binding|INFO|Setting lport 06e48782-902b-456b-b7f1-ce2d72d27357 ovn-installed in OVS
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:58:39Z|00180|binding|INFO|Setting lport 06e48782-902b-456b-b7f1-ce2d72d27357 up in Southbound
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.248 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:bc:1d 10.100.0.14'], port_security=['fa:16:3e:11:bc:1d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1099298b-0725-435d-ae44-ced74a5c30ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=06e48782-902b-456b-b7f1-ce2d72d27357) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.249 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 06e48782-902b-456b-b7f1-ce2d72d27357 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.250 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.262 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab153694-05e4-4c7f-9b12-6ab5f73e70aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.263 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39a43da9-c1 in ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.266 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39a43da9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.266 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[333d881a-1692-43e7-a506-c61714827ea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.267 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3229e0-7fea-4ec0-8226-1685869372b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 systemd-udevd[221018]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.279 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[8e466fc7-8b5b-467b-aff2-f00739a2264d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 NetworkManager[51587]: <info>  [1760371119.2847] device (tap06e48782-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:58:39 np0005485008 NetworkManager[51587]: <info>  [1760371119.2856] device (tap06e48782-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:58:39 np0005485008 systemd-machined[152551]: New machine qemu-14-instance-00000010.
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.307 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[da27ae89-5663-4624-b6b3-051a7c501d0e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 systemd[1]: Started Virtual Machine qemu-14-instance-00000010.
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.343 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5f498d-170e-491b-a932-acd9e0354788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.350 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a950cff-d460-4336-9243-0b68b9db4396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 NetworkManager[51587]: <info>  [1760371119.3518] manager: (tap39a43da9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.387 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[62fb95da-bafc-4947-977e-fd8ccbab2e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.391 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[e41a664a-7a92-4136-be64-80f0e9f0dd3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 NetworkManager[51587]: <info>  [1760371119.4180] device (tap39a43da9-c0): carrier: link connected
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.424 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d80af252-9584-42ea-8f9d-06b2c8ed16cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.444 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0722d85c-6837-4535-811c-005c5d669253]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465708, 'reachable_time': 35283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 221051, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.465 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5bcb6c-97bf-4805-93a9-fa0a78e2ad4e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:43e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465708, 'tstamp': 465708}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 221052, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.485 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b38a8ff5-ac9b-4db2-96e0-6dc388f8007c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465708, 'reachable_time': 35283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 221053, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.501 2 DEBUG nova.compute.manager [req-ed5b207a-f024-48ef-ba73-b0f2d7023954 req-62dfcd9d-4fc2-4ba9-9dbf-8bec3915eff6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received event network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.501 2 DEBUG oslo_concurrency.lockutils [req-ed5b207a-f024-48ef-ba73-b0f2d7023954 req-62dfcd9d-4fc2-4ba9-9dbf-8bec3915eff6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.502 2 DEBUG oslo_concurrency.lockutils [req-ed5b207a-f024-48ef-ba73-b0f2d7023954 req-62dfcd9d-4fc2-4ba9-9dbf-8bec3915eff6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.502 2 DEBUG oslo_concurrency.lockutils [req-ed5b207a-f024-48ef-ba73-b0f2d7023954 req-62dfcd9d-4fc2-4ba9-9dbf-8bec3915eff6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.502 2 DEBUG nova.compute.manager [req-ed5b207a-f024-48ef-ba73-b0f2d7023954 req-62dfcd9d-4fc2-4ba9-9dbf-8bec3915eff6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Processing event network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.522 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[bed312ae-f292-4022-8094-6a1b820bb234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.589 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[58d931ce-7f6a-4d01-bd97-127cdfd2056c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.591 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.592 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.592 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:58:39 np0005485008 NetworkManager[51587]: <info>  [1760371119.5954] manager: (tap39a43da9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct 13 11:58:39 np0005485008 kernel: tap39a43da9-c0: entered promiscuous mode
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.598 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.601 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 11:58:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:58:39Z|00181|binding|INFO|Releasing lport 5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182 from this chassis (sb_readonly=0)
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.604 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0962fe47-0cbc-4ab3-b426-85d60f49c29b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.605 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 11:58:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:58:39.606 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'env', 'PROCESS_TAG=haproxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39a43da9-cf4c-4fe3-ab73-bf8705320dae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 11:58:39 np0005485008 nova_compute[192512]: 2025-10-13 15:58:39.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:39 np0005485008 podman[221084]: 2025-10-13 15:58:39.968970593 +0000 UTC m=+0.045360099 container create 3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:58:40 np0005485008 systemd[1]: Started libpod-conmon-3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380.scope.
Oct 13 11:58:40 np0005485008 systemd[1]: Started libcrun container.
Oct 13 11:58:40 np0005485008 podman[221084]: 2025-10-13 15:58:39.945090876 +0000 UTC m=+0.021480402 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 11:58:40 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f7163578d9232f3f54f0eba23ed6abde77f1a49563696e1e7c91bd054b9608a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 11:58:40 np0005485008 podman[221084]: 2025-10-13 15:58:40.057173899 +0000 UTC m=+0.133563435 container init 3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 11:58:40 np0005485008 podman[221084]: 2025-10-13 15:58:40.063400264 +0000 UTC m=+0.139789770 container start 3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 11:58:40 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [NOTICE]   (221106) : New worker (221111) forked
Oct 13 11:58:40 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [NOTICE]   (221106) : Loading success.
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.127 2 DEBUG nova.network.neutron [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Updated VIF entry in instance network info cache for port 06e48782-902b-456b-b7f1-ce2d72d27357. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.128 2 DEBUG nova.network.neutron [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Updating instance_info_cache with network_info: [{"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.147 2 DEBUG oslo_concurrency.lockutils [req-a91b974f-f590-498b-9362-b749c5d53bd5 req-38c92b98-6082-4179-96b0-d25d4c4bdfee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.605 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.607 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371120.6045914, 1099298b-0725-435d-ae44-ced74a5c30ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.607 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] VM Started (Lifecycle Event)#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.610 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.615 2 INFO nova.virt.libvirt.driver [-] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Instance spawned successfully.#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.615 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.663 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.666 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.703 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.704 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.704 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.705 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.705 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.706 2 DEBUG nova.virt.libvirt.driver [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.743 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.743 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371120.604982, 1099298b-0725-435d-ae44-ced74a5c30ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.744 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] VM Paused (Lifecycle Event)#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.798 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.801 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371120.6098323, 1099298b-0725-435d-ae44-ced74a5c30ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.801 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.846 2 INFO nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Took 7.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.847 2 DEBUG nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.891 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.897 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:58:40 np0005485008 nova_compute[192512]: 2025-10-13 15:58:40.973 2 INFO nova.compute.manager [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Took 8.42 seconds to build instance.#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.121 2 DEBUG oslo_concurrency.lockutils [None req-2982f060-ed29-4621-81d6-102122d8dc35 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.568 2 DEBUG nova.compute.manager [req-e35f95f4-aacf-415c-b1dc-d006f8204158 req-1538f030-76e2-4390-a6ac-c7e90cdb8719 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received event network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.568 2 DEBUG oslo_concurrency.lockutils [req-e35f95f4-aacf-415c-b1dc-d006f8204158 req-1538f030-76e2-4390-a6ac-c7e90cdb8719 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.569 2 DEBUG oslo_concurrency.lockutils [req-e35f95f4-aacf-415c-b1dc-d006f8204158 req-1538f030-76e2-4390-a6ac-c7e90cdb8719 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.569 2 DEBUG oslo_concurrency.lockutils [req-e35f95f4-aacf-415c-b1dc-d006f8204158 req-1538f030-76e2-4390-a6ac-c7e90cdb8719 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.569 2 DEBUG nova.compute.manager [req-e35f95f4-aacf-415c-b1dc-d006f8204158 req-1538f030-76e2-4390-a6ac-c7e90cdb8719 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] No waiting events found dispatching network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:58:41 np0005485008 nova_compute[192512]: 2025-10-13 15:58:41.569 2 WARNING nova.compute.manager [req-e35f95f4-aacf-415c-b1dc-d006f8204158 req-1538f030-76e2-4390-a6ac-c7e90cdb8719 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received unexpected event network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 for instance with vm_state active and task_state None.#033[00m
Oct 13 11:58:43 np0005485008 nova_compute[192512]: 2025-10-13 15:58:43.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:46 np0005485008 nova_compute[192512]: 2025-10-13 15:58:46.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:48 np0005485008 nova_compute[192512]: 2025-10-13 15:58:48.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:58:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:58:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:58:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:58:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:58:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:58:51 np0005485008 nova_compute[192512]: 2025-10-13 15:58:51.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:51 np0005485008 ovn_controller[94758]: 2025-10-13T15:58:51Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:bc:1d 10.100.0.14
Oct 13 11:58:51 np0005485008 ovn_controller[94758]: 2025-10-13T15:58:51Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:bc:1d 10.100.0.14
Oct 13 11:58:52 np0005485008 podman[221136]: 2025-10-13 15:58:52.785945748 +0000 UTC m=+0.080971902 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 11:58:52 np0005485008 podman[221138]: 2025-10-13 15:58:52.78920519 +0000 UTC m=+0.074819540 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:58:52 np0005485008 podman[221137]: 2025-10-13 15:58:52.789921292 +0000 UTC m=+0.076313806 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:58:52 np0005485008 podman[221145]: 2025-10-13 15:58:52.824530984 +0000 UTC m=+0.104137976 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:58:52 np0005485008 podman[221139]: 2025-10-13 15:58:52.827564849 +0000 UTC m=+0.107137500 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 11:58:53 np0005485008 nova_compute[192512]: 2025-10-13 15:58:53.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:56 np0005485008 nova_compute[192512]: 2025-10-13 15:58:56.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:58:58 np0005485008 nova_compute[192512]: 2025-10-13 15:58:58.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:01 np0005485008 nova_compute[192512]: 2025-10-13 15:59:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:03 np0005485008 nova_compute[192512]: 2025-10-13 15:59:03.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:05 np0005485008 podman[202884]: time="2025-10-13T15:59:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:59:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:59:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:59:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:59:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3470 "" "Go-http-client/1.1"
Oct 13 11:59:06 np0005485008 nova_compute[192512]: 2025-10-13 15:59:06.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:06 np0005485008 podman[221233]: 2025-10-13 15:59:06.777381511 +0000 UTC m=+0.072737885 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, architecture=x86_64)
Oct 13 11:59:08 np0005485008 nova_compute[192512]: 2025-10-13 15:59:08.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:08 np0005485008 nova_compute[192512]: 2025-10-13 15:59:08.968 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:08 np0005485008 nova_compute[192512]: 2025-10-13 15:59:08.968 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 11:59:09 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:09Z|00182|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Oct 13 11:59:09 np0005485008 nova_compute[192512]: 2025-10-13 15:59:09.738 2 DEBUG nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Creating tmpfile /var/lib/nova/instances/tmp42kkvgep to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 11:59:09 np0005485008 nova_compute[192512]: 2025-10-13 15:59:09.740 2 DEBUG nova.compute.manager [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp42kkvgep',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 11:59:11 np0005485008 nova_compute[192512]: 2025-10-13 15:59:11.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:12 np0005485008 nova_compute[192512]: 2025-10-13 15:59:12.262 2 DEBUG nova.compute.manager [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp42kkvgep',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='430d4b72-3a50-4985-aa59-b15ad0d05c6b',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 11:59:12 np0005485008 nova_compute[192512]: 2025-10-13 15:59:12.295 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-430d4b72-3a50-4985-aa59-b15ad0d05c6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:59:12 np0005485008 nova_compute[192512]: 2025-10-13 15:59:12.296 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-430d4b72-3a50-4985-aa59-b15ad0d05c6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:59:12 np0005485008 nova_compute[192512]: 2025-10-13 15:59:12.296 2 DEBUG nova.network.neutron [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:59:13 np0005485008 nova_compute[192512]: 2025-10-13 15:59:13.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.343 2 DEBUG nova.network.neutron [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Updating instance_info_cache with network_info: [{"id": "7a2143c2-f397-465e-b973-2be27a892e0c", "address": "fa:16:3e:e0:a6:88", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a2143c2-f3", "ovs_interfaceid": "7a2143c2-f397-465e-b973-2be27a892e0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.370 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-430d4b72-3a50-4985-aa59-b15ad0d05c6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.371 2 DEBUG nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp42kkvgep',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='430d4b72-3a50-4985-aa59-b15ad0d05c6b',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.372 2 DEBUG nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Creating instance directory: /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.372 2 DEBUG nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Creating disk.info with the contents: {'/var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk': 'qcow2', '/var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.373 2 DEBUG nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.373 2 DEBUG nova.objects.instance [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 430d4b72-3a50-4985-aa59-b15ad0d05c6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.407 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.467 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.468 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.469 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.482 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.544 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.546 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.582 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.583 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.585 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.648 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.649 2 DEBUG nova.virt.disk.api [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.649 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.716 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.717 2 DEBUG nova.virt.disk.api [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.718 2 DEBUG nova.objects.instance [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 430d4b72-3a50-4985-aa59-b15ad0d05c6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.751 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.776 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk.config 485376" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.778 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk.config to /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 11:59:14 np0005485008 nova_compute[192512]: 2025-10-13 15:59:14.779 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk.config /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.203 2 DEBUG oslo_concurrency.processutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk.config /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.205 2 DEBUG nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.206 2 DEBUG nova.virt.libvirt.vif [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:58:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1768466586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1768466586',id=15,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:58:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-2ti9fn5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T15:58:23Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=430d4b72-3a50-4985-aa59-b15ad0d05c6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a2143c2-f397-465e-b973-2be27a892e0c", "address": "fa:16:3e:e0:a6:88", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7a2143c2-f3", "ovs_interfaceid": "7a2143c2-f397-465e-b973-2be27a892e0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.207 2 DEBUG nova.network.os_vif_util [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "7a2143c2-f397-465e-b973-2be27a892e0c", "address": "fa:16:3e:e0:a6:88", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7a2143c2-f3", "ovs_interfaceid": "7a2143c2-f397-465e-b973-2be27a892e0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.208 2 DEBUG nova.network.os_vif_util [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:a6:88,bridge_name='br-int',has_traffic_filtering=True,id=7a2143c2-f397-465e-b973-2be27a892e0c,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a2143c2-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.208 2 DEBUG os_vif [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:a6:88,bridge_name='br-int',has_traffic_filtering=True,id=7a2143c2-f397-465e-b973-2be27a892e0c,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a2143c2-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.210 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a2143c2-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.215 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a2143c2-f3, col_values=(('external_ids', {'iface-id': '7a2143c2-f397-465e-b973-2be27a892e0c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:a6:88', 'vm-uuid': '430d4b72-3a50-4985-aa59-b15ad0d05c6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:15 np0005485008 NetworkManager[51587]: <info>  [1760371155.2180] manager: (tap7a2143c2-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.225 2 INFO os_vif [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:a6:88,bridge_name='br-int',has_traffic_filtering=True,id=7a2143c2-f397-465e-b973-2be27a892e0c,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a2143c2-f3')#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.226 2 DEBUG nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.226 2 DEBUG nova.compute.manager [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp42kkvgep',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='430d4b72-3a50-4985-aa59-b15ad0d05c6b',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 11:59:15 np0005485008 nova_compute[192512]: 2025-10-13 15:59:15.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:16 np0005485008 nova_compute[192512]: 2025-10-13 15:59:16.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:16 np0005485008 nova_compute[192512]: 2025-10-13 15:59:16.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:16 np0005485008 nova_compute[192512]: 2025-10-13 15:59:16.886 2 DEBUG nova.network.neutron [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Port 7a2143c2-f397-465e-b973-2be27a892e0c updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 11:59:16 np0005485008 nova_compute[192512]: 2025-10-13 15:59:16.889 2 DEBUG nova.compute.manager [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp42kkvgep',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='430d4b72-3a50-4985-aa59-b15ad0d05c6b',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 11:59:17 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 11:59:17 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 11:59:17 np0005485008 kernel: tap7a2143c2-f3: entered promiscuous mode
Oct 13 11:59:17 np0005485008 NetworkManager[51587]: <info>  [1760371157.2090] manager: (tap7a2143c2-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Oct 13 11:59:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:17Z|00183|binding|INFO|Claiming lport 7a2143c2-f397-465e-b973-2be27a892e0c for this additional chassis.
Oct 13 11:59:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:17Z|00184|binding|INFO|7a2143c2-f397-465e-b973-2be27a892e0c: Claiming fa:16:3e:e0:a6:88 10.100.0.11
Oct 13 11:59:17 np0005485008 nova_compute[192512]: 2025-10-13 15:59:17.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:17 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:17Z|00185|binding|INFO|Setting lport 7a2143c2-f397-465e-b973-2be27a892e0c ovn-installed in OVS
Oct 13 11:59:17 np0005485008 nova_compute[192512]: 2025-10-13 15:59:17.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:17 np0005485008 systemd-udevd[221306]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 11:59:17 np0005485008 NetworkManager[51587]: <info>  [1760371157.2565] device (tap7a2143c2-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 11:59:17 np0005485008 NetworkManager[51587]: <info>  [1760371157.2579] device (tap7a2143c2-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 11:59:17 np0005485008 systemd-machined[152551]: New machine qemu-15-instance-0000000f.
Oct 13 11:59:17 np0005485008 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Oct 13 11:59:17 np0005485008 nova_compute[192512]: 2025-10-13 15:59:17.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:17 np0005485008 nova_compute[192512]: 2025-10-13 15:59:17.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:18 np0005485008 nova_compute[192512]: 2025-10-13 15:59:18.338 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371158.337044, 430d4b72-3a50-4985-aa59-b15ad0d05c6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:59:18 np0005485008 nova_compute[192512]: 2025-10-13 15:59:18.339 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] VM Started (Lifecycle Event)#033[00m
Oct 13 11:59:18 np0005485008 nova_compute[192512]: 2025-10-13 15:59:18.456 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:59:19 np0005485008 nova_compute[192512]: 2025-10-13 15:59:19.023 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371159.0233293, 430d4b72-3a50-4985-aa59-b15ad0d05c6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:59:19 np0005485008 nova_compute[192512]: 2025-10-13 15:59:19.024 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] VM Resumed (Lifecycle Event)#033[00m
Oct 13 11:59:19 np0005485008 nova_compute[192512]: 2025-10-13 15:59:19.054 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:59:19 np0005485008 nova_compute[192512]: 2025-10-13 15:59:19.057 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 11:59:19 np0005485008 nova_compute[192512]: 2025-10-13 15:59:19.082 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 11:59:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:59:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:59:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:59:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:59:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:59:19 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:59:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 11:59:19 np0005485008 nova_compute[192512]: 2025-10-13 15:59:19.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:20 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:20Z|00186|binding|INFO|Claiming lport 7a2143c2-f397-465e-b973-2be27a892e0c for this chassis.
Oct 13 11:59:20 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:20Z|00187|binding|INFO|7a2143c2-f397-465e-b973-2be27a892e0c: Claiming fa:16:3e:e0:a6:88 10.100.0.11
Oct 13 11:59:20 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:20Z|00188|binding|INFO|Setting lport 7a2143c2-f397-465e-b973-2be27a892e0c up in Southbound
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.138 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:a6:88 10.100.0.11'], port_security=['fa:16:3e:e0:a6:88 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '430d4b72-3a50-4985-aa59-b15ad0d05c6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=7a2143c2-f397-465e-b973-2be27a892e0c) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.143 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 7a2143c2-f397-465e-b973-2be27a892e0c in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.145 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.148 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.167 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdf57e4-ef28-415f-9b3d-a5e17b5d045e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.201 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[50838ddc-d49c-412d-bb99-91a58d8b326c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.204 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe1ec02-577c-441c-8066-532e8262142d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.236 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[a455cd45-cc52-4b23-b590-16537481cc22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.257 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fe2e39-af33-4cbd-a460-c4a141f5f93f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465708, 'reachable_time': 35283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 221343, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.274 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7ab860-b7ff-4f5b-b324-3e5f41a7c47c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465721, 'tstamp': 465721}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 221344, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465724, 'tstamp': 465724}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 221344, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.276 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.279 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.279 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.279 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.280 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:59:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:20.281 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.535 2 INFO nova.compute.manager [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Post operation of migration started#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.880 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-430d4b72-3a50-4985-aa59-b15ad0d05c6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.881 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-430d4b72-3a50-4985-aa59-b15ad0d05c6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:59:20 np0005485008 nova_compute[192512]: 2025-10-13 15:59:20.881 2 DEBUG nova.network.neutron [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 11:59:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:21.284 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:21 np0005485008 nova_compute[192512]: 2025-10-13 15:59:21.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:22 np0005485008 nova_compute[192512]: 2025-10-13 15:59:22.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:22 np0005485008 nova_compute[192512]: 2025-10-13 15:59:22.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 11:59:22 np0005485008 nova_compute[192512]: 2025-10-13 15:59:22.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 11:59:22 np0005485008 nova_compute[192512]: 2025-10-13 15:59:22.915 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 11:59:22 np0005485008 nova_compute[192512]: 2025-10-13 15:59:22.915 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 11:59:22 np0005485008 nova_compute[192512]: 2025-10-13 15:59:22.916 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 11:59:22 np0005485008 nova_compute[192512]: 2025-10-13 15:59:22.916 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1099298b-0725-435d-ae44-ced74a5c30ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:59:23 np0005485008 podman[221347]: 2025-10-13 15:59:23.762909197 +0000 UTC m=+0.057066905 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Oct 13 11:59:23 np0005485008 podman[221348]: 2025-10-13 15:59:23.763241388 +0000 UTC m=+0.052871844 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 11:59:23 np0005485008 podman[221346]: 2025-10-13 15:59:23.768458761 +0000 UTC m=+0.064220029 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:59:23 np0005485008 podman[221345]: 2025-10-13 15:59:23.774419467 +0000 UTC m=+0.072090335 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 11:59:23 np0005485008 podman[221349]: 2025-10-13 15:59:23.806337535 +0000 UTC m=+0.091662007 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:59:24 np0005485008 nova_compute[192512]: 2025-10-13 15:59:24.947 2 DEBUG nova.network.neutron [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Updating instance_info_cache with network_info: [{"id": "7a2143c2-f397-465e-b973-2be27a892e0c", "address": "fa:16:3e:e0:a6:88", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a2143c2-f3", "ovs_interfaceid": "7a2143c2-f397-465e-b973-2be27a892e0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:59:25 np0005485008 nova_compute[192512]: 2025-10-13 15:59:25.034 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-430d4b72-3a50-4985-aa59-b15ad0d05c6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:59:25 np0005485008 nova_compute[192512]: 2025-10-13 15:59:25.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:25 np0005485008 nova_compute[192512]: 2025-10-13 15:59:25.305 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:25 np0005485008 nova_compute[192512]: 2025-10-13 15:59:25.306 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:25 np0005485008 nova_compute[192512]: 2025-10-13 15:59:25.306 2 DEBUG oslo_concurrency.lockutils [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:25 np0005485008 nova_compute[192512]: 2025-10-13 15:59:25.311 2 INFO nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 11:59:25 np0005485008 virtqemud[192082]: Domain id=15 name='instance-0000000f' uuid=430d4b72-3a50-4985-aa59-b15ad0d05c6b is tainted: custom-monitor
Oct 13 11:59:26 np0005485008 nova_compute[192512]: 2025-10-13 15:59:26.318 2 INFO nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 11:59:26 np0005485008 nova_compute[192512]: 2025-10-13 15:59:26.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:27 np0005485008 nova_compute[192512]: 2025-10-13 15:59:27.325 2 INFO nova.virt.libvirt.driver [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 11:59:27 np0005485008 nova_compute[192512]: 2025-10-13 15:59:27.331 2 DEBUG nova.compute.manager [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:59:27 np0005485008 nova_compute[192512]: 2025-10-13 15:59:27.562 2 DEBUG nova.objects.instance [None req-eb5afaa4-9dbc-48c3-9be8-ad0ac6396f84 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 11:59:27 np0005485008 nova_compute[192512]: 2025-10-13 15:59:27.673 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Updating instance_info_cache with network_info: [{"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:59:28 np0005485008 nova_compute[192512]: 2025-10-13 15:59:28.203 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-1099298b-0725-435d-ae44-ced74a5c30ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 11:59:28 np0005485008 nova_compute[192512]: 2025-10-13 15:59:28.204 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 11:59:28 np0005485008 nova_compute[192512]: 2025-10-13 15:59:28.204 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:28 np0005485008 nova_compute[192512]: 2025-10-13 15:59:28.469 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:28 np0005485008 nova_compute[192512]: 2025-10-13 15:59:28.469 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:28 np0005485008 nova_compute[192512]: 2025-10-13 15:59:28.470 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:28 np0005485008 nova_compute[192512]: 2025-10-13 15:59:28.470 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.205 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.264 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.266 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.328 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.335 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.396 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.397 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.460 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.611 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.612 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5523MB free_disk=73.4080696105957GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.612 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:29 np0005485008 nova_compute[192512]: 2025-10-13 15:59:29.613 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.284 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Applying migration context for instance 430d4b72-3a50-4985-aa59-b15ad0d05c6b as it has an incoming, in-progress migration b7138161-6dea-4796-8c50-d687b12da37e. Migration status is running _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.285 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.679 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.713 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 1099298b-0725-435d-ae44-ced74a5c30ef actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.713 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 430d4b72-3a50-4985-aa59-b15ad0d05c6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.714 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.714 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 11:59:30 np0005485008 nova_compute[192512]: 2025-10-13 15:59:30.790 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:59:31 np0005485008 nova_compute[192512]: 2025-10-13 15:59:31.052 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:59:31 np0005485008 nova_compute[192512]: 2025-10-13 15:59:31.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:31 np0005485008 nova_compute[192512]: 2025-10-13 15:59:31.925 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 11:59:31 np0005485008 nova_compute[192512]: 2025-10-13 15:59:31.926 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.313s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:33 np0005485008 nova_compute[192512]: 2025-10-13 15:59:33.924 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 11:59:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:33.963 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:33.964 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:33.965 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:35 np0005485008 nova_compute[192512]: 2025-10-13 15:59:35.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:35 np0005485008 podman[202884]: time="2025-10-13T15:59:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 11:59:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:59:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 11:59:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:15:59:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3466 "" "Go-http-client/1.1"
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.512 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.513 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.514 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.514 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.514 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.515 2 INFO nova.compute.manager [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Terminating instance#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.516 2 DEBUG nova.compute.manager [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:59:36 np0005485008 kernel: tap06e48782-90 (unregistering): left promiscuous mode
Oct 13 11:59:36 np0005485008 NetworkManager[51587]: <info>  [1760371176.5419] device (tap06e48782-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:59:36 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:36Z|00189|binding|INFO|Releasing lport 06e48782-902b-456b-b7f1-ce2d72d27357 from this chassis (sb_readonly=0)
Oct 13 11:59:36 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:36Z|00190|binding|INFO|Setting lport 06e48782-902b-456b-b7f1-ce2d72d27357 down in Southbound
Oct 13 11:59:36 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:36Z|00191|binding|INFO|Removing iface tap06e48782-90 ovn-installed in OVS
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.568 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:bc:1d 10.100.0.14'], port_security=['fa:16:3e:11:bc:1d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1099298b-0725-435d-ae44-ced74a5c30ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=06e48782-902b-456b-b7f1-ce2d72d27357) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.570 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 06e48782-902b-456b-b7f1-ce2d72d27357 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.571 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.590 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5af0bbb6-ab8a-49c8-876c-f8a4a519fc26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:36 np0005485008 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 13 11:59:36 np0005485008 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000010.scope: Consumed 14.756s CPU time.
Oct 13 11:59:36 np0005485008 systemd-machined[152551]: Machine qemu-14-instance-00000010 terminated.
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.622 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[93034fdb-00fa-4310-849c-0d0a4dd7e756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.626 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[13ed5666-44f0-403f-aa76-d6ef8515060f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.653 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[788183d8-2d33-405f-8cb8-635b4da2ead6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.671 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ee317b31-b843-4207-8d10-6e9b0b784d79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465708, 'reachable_time': 44114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 221472, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.693 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a8ca5e-de13-439a-ade2-55629bff0912]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465721, 'tstamp': 465721}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 221473, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465724, 'tstamp': 465724}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 221473, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.695 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.703 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.704 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.704 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:36.704 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.781 2 INFO nova.virt.libvirt.driver [-] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Instance destroyed successfully.#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.782 2 DEBUG nova.objects.instance [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid 1099298b-0725-435d-ae44-ced74a5c30ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.804 2 DEBUG nova.virt.libvirt.vif [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T15:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1334267621',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1334267621',id=16,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:58:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-ssfj49th',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:58:40Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=1099298b-0725-435d-ae44-ced74a5c30ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.804 2 DEBUG nova.network.os_vif_util [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "06e48782-902b-456b-b7f1-ce2d72d27357", "address": "fa:16:3e:11:bc:1d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06e48782-90", "ovs_interfaceid": "06e48782-902b-456b-b7f1-ce2d72d27357", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.805 2 DEBUG nova.network.os_vif_util [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:bc:1d,bridge_name='br-int',has_traffic_filtering=True,id=06e48782-902b-456b-b7f1-ce2d72d27357,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e48782-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.805 2 DEBUG os_vif [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:bc:1d,bridge_name='br-int',has_traffic_filtering=True,id=06e48782-902b-456b-b7f1-ce2d72d27357,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e48782-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.808 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06e48782-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.865 2 INFO os_vif [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:bc:1d,bridge_name='br-int',has_traffic_filtering=True,id=06e48782-902b-456b-b7f1-ce2d72d27357,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06e48782-90')#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.866 2 INFO nova.virt.libvirt.driver [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Deleting instance files /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef_del#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.866 2 INFO nova.virt.libvirt.driver [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Deletion of /var/lib/nova/instances/1099298b-0725-435d-ae44-ced74a5c30ef_del complete#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.923 2 INFO nova.compute.manager [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.924 2 DEBUG oslo.service.loopingcall [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.924 2 DEBUG nova.compute.manager [-] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:59:36 np0005485008 nova_compute[192512]: 2025-10-13 15:59:36.924 2 DEBUG nova.network.neutron [-] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:59:37 np0005485008 nova_compute[192512]: 2025-10-13 15:59:37.140 2 DEBUG nova.compute.manager [req-64234677-8cee-463a-93bc-b8fd7eec1834 req-ce5b5dc3-6515-4e90-a690-689c648fed1b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received event network-vif-unplugged-06e48782-902b-456b-b7f1-ce2d72d27357 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:59:37 np0005485008 nova_compute[192512]: 2025-10-13 15:59:37.141 2 DEBUG oslo_concurrency.lockutils [req-64234677-8cee-463a-93bc-b8fd7eec1834 req-ce5b5dc3-6515-4e90-a690-689c648fed1b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:37 np0005485008 nova_compute[192512]: 2025-10-13 15:59:37.141 2 DEBUG oslo_concurrency.lockutils [req-64234677-8cee-463a-93bc-b8fd7eec1834 req-ce5b5dc3-6515-4e90-a690-689c648fed1b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:37 np0005485008 nova_compute[192512]: 2025-10-13 15:59:37.142 2 DEBUG oslo_concurrency.lockutils [req-64234677-8cee-463a-93bc-b8fd7eec1834 req-ce5b5dc3-6515-4e90-a690-689c648fed1b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:37 np0005485008 nova_compute[192512]: 2025-10-13 15:59:37.142 2 DEBUG nova.compute.manager [req-64234677-8cee-463a-93bc-b8fd7eec1834 req-ce5b5dc3-6515-4e90-a690-689c648fed1b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] No waiting events found dispatching network-vif-unplugged-06e48782-902b-456b-b7f1-ce2d72d27357 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:59:37 np0005485008 nova_compute[192512]: 2025-10-13 15:59:37.143 2 DEBUG nova.compute.manager [req-64234677-8cee-463a-93bc-b8fd7eec1834 req-ce5b5dc3-6515-4e90-a690-689c648fed1b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received event network-vif-unplugged-06e48782-902b-456b-b7f1-ce2d72d27357 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:59:37 np0005485008 podman[221492]: 2025-10-13 15:59:37.760600821 +0000 UTC m=+0.060027077 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.101 2 DEBUG nova.network.neutron [-] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.122 2 INFO nova.compute.manager [-] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Took 1.20 seconds to deallocate network for instance.#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.162 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.162 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.240 2 DEBUG nova.compute.provider_tree [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.261 2 DEBUG nova.scheduler.client.report [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.300 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.330 2 INFO nova.scheduler.client.report [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance 1099298b-0725-435d-ae44-ced74a5c30ef#033[00m
Oct 13 11:59:38 np0005485008 nova_compute[192512]: 2025-10-13 15:59:38.411 2 DEBUG oslo_concurrency.lockutils [None req-bbca2176-1899-49e6-9f8d-68034a2ee8f1 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.311 2 DEBUG nova.compute.manager [req-bc6e21db-f7bc-4a15-b699-46f71ad24f28 req-2145c373-99bb-44b1-b0e7-c4b1ff2ff799 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received event network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.311 2 DEBUG oslo_concurrency.lockutils [req-bc6e21db-f7bc-4a15-b699-46f71ad24f28 req-2145c373-99bb-44b1-b0e7-c4b1ff2ff799 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.312 2 DEBUG oslo_concurrency.lockutils [req-bc6e21db-f7bc-4a15-b699-46f71ad24f28 req-2145c373-99bb-44b1-b0e7-c4b1ff2ff799 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.312 2 DEBUG oslo_concurrency.lockutils [req-bc6e21db-f7bc-4a15-b699-46f71ad24f28 req-2145c373-99bb-44b1-b0e7-c4b1ff2ff799 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "1099298b-0725-435d-ae44-ced74a5c30ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.312 2 DEBUG nova.compute.manager [req-bc6e21db-f7bc-4a15-b699-46f71ad24f28 req-2145c373-99bb-44b1-b0e7-c4b1ff2ff799 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] No waiting events found dispatching network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.313 2 WARNING nova.compute.manager [req-bc6e21db-f7bc-4a15-b699-46f71ad24f28 req-2145c373-99bb-44b1-b0e7-c4b1ff2ff799 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received unexpected event network-vif-plugged-06e48782-902b-456b-b7f1-ce2d72d27357 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.313 2 DEBUG nova.compute.manager [req-bc6e21db-f7bc-4a15-b699-46f71ad24f28 req-2145c373-99bb-44b1-b0e7-c4b1ff2ff799 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Received event network-vif-deleted-06e48782-902b-456b-b7f1-ce2d72d27357 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.793 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.794 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.794 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.794 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.795 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.796 2 INFO nova.compute.manager [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Terminating instance#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.797 2 DEBUG nova.compute.manager [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 11:59:39 np0005485008 kernel: tap7a2143c2-f3 (unregistering): left promiscuous mode
Oct 13 11:59:39 np0005485008 NetworkManager[51587]: <info>  [1760371179.8245] device (tap7a2143c2-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 11:59:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:39Z|00192|binding|INFO|Releasing lport 7a2143c2-f397-465e-b973-2be27a892e0c from this chassis (sb_readonly=0)
Oct 13 11:59:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:39Z|00193|binding|INFO|Setting lport 7a2143c2-f397-465e-b973-2be27a892e0c down in Southbound
Oct 13 11:59:39 np0005485008 ovn_controller[94758]: 2025-10-13T15:59:39Z|00194|binding|INFO|Removing iface tap7a2143c2-f3 ovn-installed in OVS
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:39.838 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:a6:88 10.100.0.11'], port_security=['fa:16:3e:e0:a6:88 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '430d4b72-3a50-4985-aa59-b15ad0d05c6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=7a2143c2-f397-465e-b973-2be27a892e0c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 11:59:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:39.839 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 7a2143c2-f397-465e-b973-2be27a892e0c in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 11:59:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:39.840 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 11:59:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:39.841 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ed06c4-64b2-4e8c-be8a-dc4f88c1d1b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:39 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:39.842 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace which is not needed anymore#033[00m
Oct 13 11:59:39 np0005485008 nova_compute[192512]: 2025-10-13 15:59:39.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:39 np0005485008 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct 13 11:59:39 np0005485008 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 2.224s CPU time.
Oct 13 11:59:39 np0005485008 systemd-machined[152551]: Machine qemu-15-instance-0000000f terminated.
Oct 13 11:59:39 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [NOTICE]   (221106) : haproxy version is 2.8.14-c23fe91
Oct 13 11:59:39 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [NOTICE]   (221106) : path to executable is /usr/sbin/haproxy
Oct 13 11:59:39 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [WARNING]  (221106) : Exiting Master process...
Oct 13 11:59:39 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [WARNING]  (221106) : Exiting Master process...
Oct 13 11:59:39 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [ALERT]    (221106) : Current worker (221111) exited with code 143 (Terminated)
Oct 13 11:59:39 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221099]: [WARNING]  (221106) : All workers exited. Exiting... (0)
Oct 13 11:59:39 np0005485008 systemd[1]: libpod-3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380.scope: Deactivated successfully.
Oct 13 11:59:39 np0005485008 podman[221538]: 2025-10-13 15:59:39.973422184 +0000 UTC m=+0.041661239 container died 3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 11:59:39 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380-userdata-shm.mount: Deactivated successfully.
Oct 13 11:59:39 np0005485008 systemd[1]: var-lib-containers-storage-overlay-3f7163578d9232f3f54f0eba23ed6abde77f1a49563696e1e7c91bd054b9608a-merged.mount: Deactivated successfully.
Oct 13 11:59:40 np0005485008 podman[221538]: 2025-10-13 15:59:40.007613738 +0000 UTC m=+0.075852803 container cleanup 3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 11:59:40 np0005485008 systemd[1]: libpod-conmon-3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380.scope: Deactivated successfully.
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.056 2 DEBUG nova.compute.manager [req-72aa2552-600a-46e3-9a08-69fb3da6ec7f req-d423aba0-416b-4d0d-b8f9-266e35f9e744 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Received event network-vif-unplugged-7a2143c2-f397-465e-b973-2be27a892e0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.057 2 DEBUG oslo_concurrency.lockutils [req-72aa2552-600a-46e3-9a08-69fb3da6ec7f req-d423aba0-416b-4d0d-b8f9-266e35f9e744 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.057 2 DEBUG oslo_concurrency.lockutils [req-72aa2552-600a-46e3-9a08-69fb3da6ec7f req-d423aba0-416b-4d0d-b8f9-266e35f9e744 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.058 2 DEBUG oslo_concurrency.lockutils [req-72aa2552-600a-46e3-9a08-69fb3da6ec7f req-d423aba0-416b-4d0d-b8f9-266e35f9e744 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.058 2 DEBUG nova.compute.manager [req-72aa2552-600a-46e3-9a08-69fb3da6ec7f req-d423aba0-416b-4d0d-b8f9-266e35f9e744 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] No waiting events found dispatching network-vif-unplugged-7a2143c2-f397-465e-b973-2be27a892e0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.058 2 DEBUG nova.compute.manager [req-72aa2552-600a-46e3-9a08-69fb3da6ec7f req-d423aba0-416b-4d0d-b8f9-266e35f9e744 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Received event network-vif-unplugged-7a2143c2-f397-465e-b973-2be27a892e0c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.061 2 INFO nova.virt.libvirt.driver [-] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Instance destroyed successfully.#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.061 2 DEBUG nova.objects.instance [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid 430d4b72-3a50-4985-aa59-b15ad0d05c6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 11:59:40 np0005485008 podman[221572]: 2025-10-13 15:59:40.067432898 +0000 UTC m=+0.037270992 container remove 3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.072 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[06408264-ef8e-4594-b2fb-0ef75f87f3b7]: (4, ('Mon Oct 13 03:59:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380)\n3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380\nMon Oct 13 03:59:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380)\n3f704fc26de85dc79a9cd93502fd2af7b0921fb4e7bd984f4b1408f1857f4380\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.073 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf19cc4-0457-44ad-af50-771588f8a58a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.074 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:40 np0005485008 kernel: tap39a43da9-c0: left promiscuous mode
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.092 2 DEBUG nova.virt.libvirt.vif [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T15:58:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1768466586',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1768466586',id=15,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T15:58:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-2ti9fn5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T15:59:28Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=430d4b72-3a50-4985-aa59-b15ad0d05c6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a2143c2-f397-465e-b973-2be27a892e0c", "address": "fa:16:3e:e0:a6:88", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a2143c2-f3", "ovs_interfaceid": "7a2143c2-f397-465e-b973-2be27a892e0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.092 2 DEBUG nova.network.os_vif_util [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "7a2143c2-f397-465e-b973-2be27a892e0c", "address": "fa:16:3e:e0:a6:88", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a2143c2-f3", "ovs_interfaceid": "7a2143c2-f397-465e-b973-2be27a892e0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.093 2 DEBUG nova.network.os_vif_util [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:a6:88,bridge_name='br-int',has_traffic_filtering=True,id=7a2143c2-f397-465e-b973-2be27a892e0c,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a2143c2-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.093 2 DEBUG os_vif [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:a6:88,bridge_name='br-int',has_traffic_filtering=True,id=7a2143c2-f397-465e-b973-2be27a892e0c,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a2143c2-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.095 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5488645e-4e35-4879-b6b9-c8eebbb5cf1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.096 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a2143c2-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.103 2 INFO os_vif [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:a6:88,bridge_name='br-int',has_traffic_filtering=True,id=7a2143c2-f397-465e-b973-2be27a892e0c,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7a2143c2-f3')#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.104 2 INFO nova.virt.libvirt.driver [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Deleting instance files /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b_del#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.105 2 INFO nova.virt.libvirt.driver [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Deletion of /var/lib/nova/instances/430d4b72-3a50-4985-aa59-b15ad0d05c6b_del complete#033[00m
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.120 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ff5614-a34d-4da6-b3fe-31eb04f047c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.121 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[93d37ee3-8d6a-478c-93e7-3b2eac72c00d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.135 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c4006d93-f8e0-4f37-89db-10d3b847113f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465700, 'reachable_time': 17715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 221600, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:40 np0005485008 systemd[1]: run-netns-ovnmeta\x2d39a43da9\x2dcf4c\x2d4fe3\x2dab73\x2dbf8705320dae.mount: Deactivated successfully.
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.141 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 11:59:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 15:59:40.141 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[4819ed2b-5096-45e7-b508-066321ceb04c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.157 2 INFO nova.compute.manager [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.157 2 DEBUG oslo.service.loopingcall [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.158 2 DEBUG nova.compute.manager [-] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.158 2 DEBUG nova.network.neutron [-] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.617 2 DEBUG nova.network.neutron [-] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.650 2 INFO nova.compute.manager [-] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Took 0.49 seconds to deallocate network for instance.#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.692 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.692 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.744 2 DEBUG nova.compute.provider_tree [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.763 2 DEBUG nova.scheduler.client.report [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.786 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.814 2 INFO nova.scheduler.client.report [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance 430d4b72-3a50-4985-aa59-b15ad0d05c6b#033[00m
Oct 13 11:59:40 np0005485008 nova_compute[192512]: 2025-10-13 15:59:40.877 2 DEBUG oslo_concurrency.lockutils [None req-c94b7674-fcac-4846-a576-86d656f0ea0a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:41 np0005485008 nova_compute[192512]: 2025-10-13 15:59:41.500 2 DEBUG nova.compute.manager [req-96e765a5-3b68-49b9-b858-bb8d83598c72 req-1c7c9646-9f0e-42d8-a293-eaedec2d3fdd 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Received event network-vif-deleted-7a2143c2-f397-465e-b973-2be27a892e0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:59:41 np0005485008 nova_compute[192512]: 2025-10-13 15:59:41.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:42 np0005485008 nova_compute[192512]: 2025-10-13 15:59:42.140 2 DEBUG nova.compute.manager [req-f5fff18b-0901-4f9e-a167-2a1e518b2ab4 req-4adbc37c-b497-4220-b97c-003ef478a7a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Received event network-vif-plugged-7a2143c2-f397-465e-b973-2be27a892e0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 11:59:42 np0005485008 nova_compute[192512]: 2025-10-13 15:59:42.141 2 DEBUG oslo_concurrency.lockutils [req-f5fff18b-0901-4f9e-a167-2a1e518b2ab4 req-4adbc37c-b497-4220-b97c-003ef478a7a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 11:59:42 np0005485008 nova_compute[192512]: 2025-10-13 15:59:42.141 2 DEBUG oslo_concurrency.lockutils [req-f5fff18b-0901-4f9e-a167-2a1e518b2ab4 req-4adbc37c-b497-4220-b97c-003ef478a7a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 11:59:42 np0005485008 nova_compute[192512]: 2025-10-13 15:59:42.142 2 DEBUG oslo_concurrency.lockutils [req-f5fff18b-0901-4f9e-a167-2a1e518b2ab4 req-4adbc37c-b497-4220-b97c-003ef478a7a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "430d4b72-3a50-4985-aa59-b15ad0d05c6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 11:59:42 np0005485008 nova_compute[192512]: 2025-10-13 15:59:42.142 2 DEBUG nova.compute.manager [req-f5fff18b-0901-4f9e-a167-2a1e518b2ab4 req-4adbc37c-b497-4220-b97c-003ef478a7a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] No waiting events found dispatching network-vif-plugged-7a2143c2-f397-465e-b973-2be27a892e0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 11:59:42 np0005485008 nova_compute[192512]: 2025-10-13 15:59:42.142 2 WARNING nova.compute.manager [req-f5fff18b-0901-4f9e-a167-2a1e518b2ab4 req-4adbc37c-b497-4220-b97c-003ef478a7a1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Received unexpected event network-vif-plugged-7a2143c2-f397-465e-b973-2be27a892e0c for instance with vm_state deleted and task_state None.#033[00m
Oct 13 11:59:45 np0005485008 nova_compute[192512]: 2025-10-13 15:59:45.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:46 np0005485008 nova_compute[192512]: 2025-10-13 15:59:46.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 11:59:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:59:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 11:59:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 11:59:49 np0005485008 openstack_network_exporter[205063]: ERROR   15:59:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 11:59:50 np0005485008 nova_compute[192512]: 2025-10-13 15:59:50.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:51 np0005485008 nova_compute[192512]: 2025-10-13 15:59:51.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:51 np0005485008 nova_compute[192512]: 2025-10-13 15:59:51.780 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371176.7779942, 1099298b-0725-435d-ae44-ced74a5c30ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:59:51 np0005485008 nova_compute[192512]: 2025-10-13 15:59:51.781 2 INFO nova.compute.manager [-] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:59:51 np0005485008 nova_compute[192512]: 2025-10-13 15:59:51.842 2 DEBUG nova.compute.manager [None req-604a2605-ea37-441d-8350-633ce62f40bc - - - - - -] [instance: 1099298b-0725-435d-ae44-ced74a5c30ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:59:54 np0005485008 podman[221601]: 2025-10-13 15:59:54.777985861 +0000 UTC m=+0.076593276 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 11:59:54 np0005485008 podman[221604]: 2025-10-13 15:59:54.78335487 +0000 UTC m=+0.070072322 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 11:59:54 np0005485008 podman[221602]: 2025-10-13 15:59:54.804558016 +0000 UTC m=+0.098753313 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid)
Oct 13 11:59:54 np0005485008 podman[221603]: 2025-10-13 15:59:54.811612498 +0000 UTC m=+0.099475915 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 11:59:54 np0005485008 podman[221611]: 2025-10-13 15:59:54.840666611 +0000 UTC m=+0.120019351 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 11:59:55 np0005485008 nova_compute[192512]: 2025-10-13 15:59:55.060 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371180.057074, 430d4b72-3a50-4985-aa59-b15ad0d05c6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 11:59:55 np0005485008 nova_compute[192512]: 2025-10-13 15:59:55.060 2 INFO nova.compute.manager [-] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] VM Stopped (Lifecycle Event)#033[00m
Oct 13 11:59:55 np0005485008 nova_compute[192512]: 2025-10-13 15:59:55.087 2 DEBUG nova.compute.manager [None req-72003e46-7b0a-4075-bd36-be0f7401cede - - - - - -] [instance: 430d4b72-3a50-4985-aa59-b15ad0d05c6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 11:59:55 np0005485008 nova_compute[192512]: 2025-10-13 15:59:55.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 11:59:56 np0005485008 nova_compute[192512]: 2025-10-13 15:59:56.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:00 np0005485008 nova_compute[192512]: 2025-10-13 16:00:00.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:01 np0005485008 nova_compute[192512]: 2025-10-13 16:00:01.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:05 np0005485008 nova_compute[192512]: 2025-10-13 16:00:05.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:05 np0005485008 podman[202884]: time="2025-10-13T16:00:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:00:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:00:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:00:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:00:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3005 "" "Go-http-client/1.1"
Oct 13 12:00:06 np0005485008 nova_compute[192512]: 2025-10-13 16:00:06.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:08 np0005485008 podman[221704]: 2025-10-13 16:00:08.760526607 +0000 UTC m=+0.062775653 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 12:00:09 np0005485008 nova_compute[192512]: 2025-10-13 16:00:09.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:09 np0005485008 nova_compute[192512]: 2025-10-13 16:00:09.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:00:10 np0005485008 nova_compute[192512]: 2025-10-13 16:00:10.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:11 np0005485008 nova_compute[192512]: 2025-10-13 16:00:11.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:15 np0005485008 nova_compute[192512]: 2025-10-13 16:00:15.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:15 np0005485008 nova_compute[192512]: 2025-10-13 16:00:15.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:16 np0005485008 nova_compute[192512]: 2025-10-13 16:00:16.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:17 np0005485008 nova_compute[192512]: 2025-10-13 16:00:17.437 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:17 np0005485008 nova_compute[192512]: 2025-10-13 16:00:17.438 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.304 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.304 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.352 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.495 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.496 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.503 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.504 2 INFO nova.compute.claims [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.658 2 DEBUG nova.compute.provider_tree [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.673 2 DEBUG nova.scheduler.client.report [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.712 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.713 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.772 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.772 2 DEBUG nova.network.neutron [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.796 2 INFO nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.827 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.907 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.909 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.909 2 INFO nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Creating image(s)#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.910 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "/var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.910 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.910 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:18 np0005485008 nova_compute[192512]: 2025-10-13 16:00:18.923 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.004 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.005 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.006 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.018 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.078 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.080 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.116 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.118 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.118 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.193 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.195 2 DEBUG nova.virt.disk.api [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Checking if we can resize image /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.195 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.277 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.278 2 DEBUG nova.virt.disk.api [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Cannot resize image /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.279 2 DEBUG nova.objects.instance [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.298 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.299 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Ensure instance console log exists: /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.300 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.301 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.301 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:00:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:00:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:00:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:00:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:00:19 np0005485008 nova_compute[192512]: 2025-10-13 16:00:19.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:20 np0005485008 nova_compute[192512]: 2025-10-13 16:00:20.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:20 np0005485008 nova_compute[192512]: 2025-10-13 16:00:20.211 2 DEBUG nova.network.neutron [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Successfully created port: 50291ee1-d049-4960-a775-5ebfbc422214 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 12:00:20 np0005485008 nova_compute[192512]: 2025-10-13 16:00:20.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:20.764 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:00:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:20.765 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.049 2 DEBUG nova.network.neutron [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Successfully updated port: 50291ee1-d049-4960-a775-5ebfbc422214 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.067 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.068 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquired lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.068 2 DEBUG nova.network.neutron [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.170 2 DEBUG nova.compute.manager [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-changed-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.170 2 DEBUG nova.compute.manager [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Refreshing instance network info cache due to event network-changed-50291ee1-d049-4960-a775-5ebfbc422214. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.171 2 DEBUG oslo_concurrency.lockutils [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.241 2 DEBUG nova.network.neutron [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 12:00:21 np0005485008 nova_compute[192512]: 2025-10-13 16:00:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.261 2 DEBUG nova.network.neutron [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updating instance_info_cache with network_info: [{"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.278 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Releasing lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.278 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Instance network_info: |[{"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.279 2 DEBUG oslo_concurrency.lockutils [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.279 2 DEBUG nova.network.neutron [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Refreshing network info cache for port 50291ee1-d049-4960-a775-5ebfbc422214 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.282 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Start _get_guest_xml network_info=[{"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.287 2 WARNING nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.294 2 DEBUG nova.virt.libvirt.host [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.294 2 DEBUG nova.virt.libvirt.host [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.300 2 DEBUG nova.virt.libvirt.host [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.301 2 DEBUG nova.virt.libvirt.host [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.301 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.301 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.302 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.302 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.302 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.302 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.303 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.303 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.303 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.303 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.304 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.304 2 DEBUG nova.virt.hardware [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.308 2 DEBUG nova.virt.libvirt.vif [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:00:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-311358289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-311358289',id=18,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-8c1u80pb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=TagList,tas
k_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:00:18Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.309 2 DEBUG nova.network.os_vif_util [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.310 2 DEBUG nova.network.os_vif_util [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.311 2 DEBUG nova.objects.instance [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.324 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] End _get_guest_xml xml=<domain type="kvm">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <uuid>3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f</uuid>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <name>instance-00000012</name>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteStrategies-server-311358289</nova:name>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 16:00:22</nova:creationTime>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:user uuid="3f85e781b03b405795a2079908bd2792">tempest-TestExecuteStrategies-1416319229-project-admin</nova:user>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:project uuid="4d9418fd42c841d38cbfc7819a3fca65">tempest-TestExecuteStrategies-1416319229</nova:project>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        <nova:port uuid="50291ee1-d049-4960-a775-5ebfbc422214">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <system>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <entry name="serial">3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f</entry>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <entry name="uuid">3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f</entry>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </system>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <os>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  </os>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <features>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  </features>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  </clock>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  <devices>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.config"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:33:e8:6d"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <target dev="tap50291ee1-d0"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </interface>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/console.log" append="off"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </serial>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <video>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </video>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </rng>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 12:00:22 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 12:00:22 np0005485008 nova_compute[192512]:  </devices>
Oct 13 12:00:22 np0005485008 nova_compute[192512]: </domain>
Oct 13 12:00:22 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.326 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Preparing to wait for external event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.326 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.326 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.326 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.327 2 DEBUG nova.virt.libvirt.vif [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:00:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-311358289',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-311358289',id=18,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-8c1u80pb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=T
agList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:00:18Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.327 2 DEBUG nova.network.os_vif_util [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.328 2 DEBUG nova.network.os_vif_util [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.329 2 DEBUG os_vif [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.330 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.334 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50291ee1-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50291ee1-d0, col_values=(('external_ids', {'iface-id': '50291ee1-d049-4960-a775-5ebfbc422214', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:e8:6d', 'vm-uuid': '3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:22 np0005485008 NetworkManager[51587]: <info>  [1760371222.3376] manager: (tap50291ee1-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.343 2 INFO os_vif [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0')#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.412 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.413 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.413 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No VIF found with MAC fa:16:3e:33:e8:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.414 2 INFO nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Using config drive#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.464 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.465 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.465 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.465 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.539 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.600 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.602 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.663 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.665 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Periodic task is updating the host stat, it is trying to get disk instance-00000012, but disk file was removed by concurrent operations such as resize.: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.config'#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.800 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.802 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5857MB free_disk=73.46577072143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.802 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.803 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.863 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.864 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.864 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.899 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.925 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.951 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.951 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.954 2 INFO nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Creating config drive at /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.config#033[00m
Oct 13 12:00:22 np0005485008 nova_compute[192512]: 2025-10-13 16:00:22.958 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_qdwyrt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.086 2 DEBUG oslo_concurrency.processutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_qdwyrt" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:00:23 np0005485008 NetworkManager[51587]: <info>  [1760371223.1499] manager: (tap50291ee1-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Oct 13 12:00:23 np0005485008 kernel: tap50291ee1-d0: entered promiscuous mode
Oct 13 12:00:23 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:23Z|00195|binding|INFO|Claiming lport 50291ee1-d049-4960-a775-5ebfbc422214 for this chassis.
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:23Z|00196|binding|INFO|50291ee1-d049-4960-a775-5ebfbc422214: Claiming fa:16:3e:33:e8:6d 10.100.0.14
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.164 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:e8:6d 10.100.0.14'], port_security=['fa:16:3e:33:e8:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50291ee1-d049-4960-a775-5ebfbc422214) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.165 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50291ee1-d049-4960-a775-5ebfbc422214 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.166 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:00:23 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:23Z|00197|binding|INFO|Setting lport 50291ee1-d049-4960-a775-5ebfbc422214 ovn-installed in OVS
Oct 13 12:00:23 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:23Z|00198|binding|INFO|Setting lport 50291ee1-d049-4960-a775-5ebfbc422214 up in Southbound
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.179 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d1a9f3-362d-4f7d-af7e-3f90c4e9410b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.180 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39a43da9-c1 in ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.182 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39a43da9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.182 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[565936f3-1d02-4292-9b7d-f0788da0703f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.183 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a279ead7-a605-4068-91db-7fa2dd0feefd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 systemd-udevd[221768]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:00:23 np0005485008 systemd-machined[152551]: New machine qemu-16-instance-00000012.
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.196 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[e73cceeb-faa0-41df-a79d-c5dfc32e9f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 NetworkManager[51587]: <info>  [1760371223.1981] device (tap50291ee1-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:00:23 np0005485008 NetworkManager[51587]: <info>  [1760371223.1993] device (tap50291ee1-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:00:23 np0005485008 systemd[1]: Started Virtual Machine qemu-16-instance-00000012.
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.212 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d133e4-ee9d-41bf-b7c0-3a6c156fb565]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.243 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[400a22f7-35c4-4144-89c3-20b8f7ea4d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 NetworkManager[51587]: <info>  [1760371223.2500] manager: (tap39a43da9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.249 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ab05af5f-082a-4b07-a7a1-6cdc0c3eb474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.280 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[da1e8cd8-6a13-41a1-86b1-e2fdd395febe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.284 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2ff536-f468-4d92-8c98-c9aa88bb701c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 NetworkManager[51587]: <info>  [1760371223.3128] device (tap39a43da9-c0): carrier: link connected
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.318 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[de8f563d-e4b9-4610-bba8-2a74e9682637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.340 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f427fa5e-1d8e-456d-82ee-6dcf431fde4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476097, 'reachable_time': 28694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 221801, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.359 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa3460c-99c2-442a-aff1-31bf003f4366]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:43e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476097, 'tstamp': 476097}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 221802, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.378 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[be6719c6-2ccd-4bbf-a419-7ddb4a29c7c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476097, 'reachable_time': 28694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 221803, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.384 2 DEBUG nova.compute.manager [req-c68a5893-f3cb-489d-8249-afb89c899cc1 req-94ae0e3b-20b9-432d-a0bc-12b18d78d2d6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.385 2 DEBUG oslo_concurrency.lockutils [req-c68a5893-f3cb-489d-8249-afb89c899cc1 req-94ae0e3b-20b9-432d-a0bc-12b18d78d2d6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.385 2 DEBUG oslo_concurrency.lockutils [req-c68a5893-f3cb-489d-8249-afb89c899cc1 req-94ae0e3b-20b9-432d-a0bc-12b18d78d2d6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.386 2 DEBUG oslo_concurrency.lockutils [req-c68a5893-f3cb-489d-8249-afb89c899cc1 req-94ae0e3b-20b9-432d-a0bc-12b18d78d2d6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.386 2 DEBUG nova.compute.manager [req-c68a5893-f3cb-489d-8249-afb89c899cc1 req-94ae0e3b-20b9-432d-a0bc-12b18d78d2d6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Processing event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.417 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[941a2d99-9330-4375-9756-424ea0930abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.487 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0a48f4-2153-4d5e-9faf-dd0d49b03506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.489 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.489 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.489 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.491 2 DEBUG nova.network.neutron [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updated VIF entry in instance network info cache for port 50291ee1-d049-4960-a775-5ebfbc422214. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 12:00:23 np0005485008 NetworkManager[51587]: <info>  [1760371223.4920] manager: (tap39a43da9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.492 2 DEBUG nova.network.neutron [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updating instance_info_cache with network_info: [{"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:00:23 np0005485008 kernel: tap39a43da9-c0: entered promiscuous mode
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.496 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:23Z|00199|binding|INFO|Releasing lport 5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182 from this chassis (sb_readonly=0)
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.499 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.500 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[aac8bee4-e9be-48da-9a70-b3ba989233dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.501 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:00:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:23.502 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'env', 'PROCESS_TAG=haproxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39a43da9-cf4c-4fe3-ab73-bf8705320dae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.514 2 DEBUG oslo_concurrency.lockutils [req-bd9ab79b-2a36-4e77-bce3-cd5c3fb533bc req-c51378b4-efe1-4d75-857f-46f8fd29dc6b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:00:23 np0005485008 podman[221842]: 2025-10-13 16:00:23.891841738 +0000 UTC m=+0.058812439 container create 578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.902 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.903 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371223.9013515, 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.903 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] VM Started (Lifecycle Event)#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.906 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.909 2 INFO nova.virt.libvirt.driver [-] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Instance spawned successfully.#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.909 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.928 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.933 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.938 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.938 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:00:23 np0005485008 systemd[1]: Started libpod-conmon-578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f.scope.
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.940 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.941 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.941 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.941 2 DEBUG nova.virt.libvirt.driver [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:00:23 np0005485008 podman[221842]: 2025-10-13 16:00:23.855739904 +0000 UTC m=+0.022710635 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.952 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.953 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.953 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.966 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.967 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371223.9025974, 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.967 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] VM Paused (Lifecycle Event)#033[00m
Oct 13 12:00:23 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:00:23 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81c928b5b67828a563c2b048e8c46fbc765eed72c10dbceb5b65e2ea65080b49/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.973 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.974 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:00:23 np0005485008 podman[221842]: 2025-10-13 16:00:23.987708179 +0000 UTC m=+0.154678910 container init 578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 12:00:23 np0005485008 podman[221842]: 2025-10-13 16:00:23.994544493 +0000 UTC m=+0.161515194 container start 578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 12:00:23 np0005485008 nova_compute[192512]: 2025-10-13 16:00:23.995 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.001 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371223.9057646, 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.001 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.005 2 INFO nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Took 5.10 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.006 2 DEBUG nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.019 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:00:24 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221857]: [NOTICE]   (221861) : New worker (221863) forked
Oct 13 12:00:24 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221857]: [NOTICE]   (221861) : Loading success.
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.024 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.053 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.075 2 INFO nova.compute.manager [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Took 5.64 seconds to build instance.#033[00m
Oct 13 12:00:24 np0005485008 nova_compute[192512]: 2025-10-13 16:00:24.096 2 DEBUG oslo_concurrency.lockutils [None req-095c9c4a-b5ee-4aea-94fb-722975fc41be 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:25 np0005485008 nova_compute[192512]: 2025-10-13 16:00:25.465 2 DEBUG nova.compute.manager [req-bf35852a-d13f-498d-8c21-176381b1da8c req-4e66c055-0714-4a93-8abc-b20201c67b3b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:00:25 np0005485008 nova_compute[192512]: 2025-10-13 16:00:25.465 2 DEBUG oslo_concurrency.lockutils [req-bf35852a-d13f-498d-8c21-176381b1da8c req-4e66c055-0714-4a93-8abc-b20201c67b3b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:25 np0005485008 nova_compute[192512]: 2025-10-13 16:00:25.466 2 DEBUG oslo_concurrency.lockutils [req-bf35852a-d13f-498d-8c21-176381b1da8c req-4e66c055-0714-4a93-8abc-b20201c67b3b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:25 np0005485008 nova_compute[192512]: 2025-10-13 16:00:25.466 2 DEBUG oslo_concurrency.lockutils [req-bf35852a-d13f-498d-8c21-176381b1da8c req-4e66c055-0714-4a93-8abc-b20201c67b3b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:25 np0005485008 nova_compute[192512]: 2025-10-13 16:00:25.467 2 DEBUG nova.compute.manager [req-bf35852a-d13f-498d-8c21-176381b1da8c req-4e66c055-0714-4a93-8abc-b20201c67b3b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:00:25 np0005485008 nova_compute[192512]: 2025-10-13 16:00:25.467 2 WARNING nova.compute.manager [req-bf35852a-d13f-498d-8c21-176381b1da8c req-4e66c055-0714-4a93-8abc-b20201c67b3b 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received unexpected event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with vm_state active and task_state None.#033[00m
Oct 13 12:00:25 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:25.767 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:00:25 np0005485008 podman[221880]: 2025-10-13 16:00:25.777035431 +0000 UTC m=+0.062718531 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:00:25 np0005485008 podman[221873]: 2025-10-13 16:00:25.777091663 +0000 UTC m=+0.072385476 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 12:00:25 np0005485008 podman[221874]: 2025-10-13 16:00:25.794673534 +0000 UTC m=+0.074494580 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Oct 13 12:00:25 np0005485008 podman[221872]: 2025-10-13 16:00:25.809588893 +0000 UTC m=+0.109005495 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 13 12:00:25 np0005485008 podman[221887]: 2025-10-13 16:00:25.838403239 +0000 UTC m=+0.107012873 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 12:00:26 np0005485008 nova_compute[192512]: 2025-10-13 16:00:26.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:27 np0005485008 nova_compute[192512]: 2025-10-13 16:00:27.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:28 np0005485008 nova_compute[192512]: 2025-10-13 16:00:28.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:28 np0005485008 nova_compute[192512]: 2025-10-13 16:00:28.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 12:00:28 np0005485008 nova_compute[192512]: 2025-10-13 16:00:28.443 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 12:00:31 np0005485008 nova_compute[192512]: 2025-10-13 16:00:31.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:32 np0005485008 nova_compute[192512]: 2025-10-13 16:00:32.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:33.964 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:00:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:33.965 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:00:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:00:33.966 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:00:34 np0005485008 nova_compute[192512]: 2025-10-13 16:00:34.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:00:34 np0005485008 nova_compute[192512]: 2025-10-13 16:00:34.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 12:00:35 np0005485008 podman[202884]: time="2025-10-13T16:00:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:00:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:00:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20779 "" "Go-http-client/1.1"
Oct 13 12:00:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:00:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3466 "" "Go-http-client/1.1"
Oct 13 12:00:36 np0005485008 nova_compute[192512]: 2025-10-13 16:00:36.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:36 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:36Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:e8:6d 10.100.0.14
Oct 13 12:00:36 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:36Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:e8:6d 10.100.0.14
Oct 13 12:00:37 np0005485008 nova_compute[192512]: 2025-10-13 16:00:37.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:39 np0005485008 podman[221986]: 2025-10-13 16:00:39.782956813 +0000 UTC m=+0.087564662 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 12:00:41 np0005485008 nova_compute[192512]: 2025-10-13 16:00:41.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:42 np0005485008 nova_compute[192512]: 2025-10-13 16:00:42.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:46 np0005485008 nova_compute[192512]: 2025-10-13 16:00:46.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:47 np0005485008 nova_compute[192512]: 2025-10-13 16:00:47.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:00:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:00:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:00:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:00:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:00:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:00:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:00:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:00:51 np0005485008 nova_compute[192512]: 2025-10-13 16:00:51.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:52 np0005485008 nova_compute[192512]: 2025-10-13 16:00:52.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:00:53Z|00200|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 13 12:00:56 np0005485008 nova_compute[192512]: 2025-10-13 16:00:56.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:00:56 np0005485008 podman[222007]: 2025-10-13 16:00:56.764735812 +0000 UTC m=+0.063965400 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:00:56 np0005485008 podman[222008]: 2025-10-13 16:00:56.779713031 +0000 UTC m=+0.074417858 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 13 12:00:56 np0005485008 podman[222010]: 2025-10-13 16:00:56.806188124 +0000 UTC m=+0.094418057 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:00:56 np0005485008 podman[222009]: 2025-10-13 16:00:56.806784993 +0000 UTC m=+0.098077272 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 12:00:56 np0005485008 podman[222016]: 2025-10-13 16:00:56.815574549 +0000 UTC m=+0.096703839 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 12:00:57 np0005485008 nova_compute[192512]: 2025-10-13 16:00:57.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:01 np0005485008 nova_compute[192512]: 2025-10-13 16:01:01.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:02 np0005485008 nova_compute[192512]: 2025-10-13 16:01:02.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:05 np0005485008 podman[202884]: time="2025-10-13T16:01:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:01:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:01:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20779 "" "Go-http-client/1.1"
Oct 13 12:01:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:01:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3474 "" "Go-http-client/1.1"
Oct 13 12:01:06 np0005485008 nova_compute[192512]: 2025-10-13 16:01:06.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:07 np0005485008 nova_compute[192512]: 2025-10-13 16:01:07.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:07 np0005485008 nova_compute[192512]: 2025-10-13 16:01:07.920 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Check if temp file /var/lib/nova/instances/tmp8czd5q66 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct 13 12:01:07 np0005485008 nova_compute[192512]: 2025-10-13 16:01:07.920 2 DEBUG nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8czd5q66',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct 13 12:01:09 np0005485008 nova_compute[192512]: 2025-10-13 16:01:09.796 2 DEBUG oslo_concurrency.processutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:01:09 np0005485008 nova_compute[192512]: 2025-10-13 16:01:09.863 2 DEBUG oslo_concurrency.processutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:01:09 np0005485008 nova_compute[192512]: 2025-10-13 16:01:09.864 2 DEBUG oslo_concurrency.processutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:01:09 np0005485008 nova_compute[192512]: 2025-10-13 16:01:09.929 2 DEBUG oslo_concurrency.processutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:01:10 np0005485008 podman[222131]: 2025-10-13 16:01:10.775895188 +0000 UTC m=+0.066152388 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 12:01:11 np0005485008 nova_compute[192512]: 2025-10-13 16:01:11.455 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:11 np0005485008 nova_compute[192512]: 2025-10-13 16:01:11.456 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:01:11 np0005485008 nova_compute[192512]: 2025-10-13 16:01:11.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:12 np0005485008 nova_compute[192512]: 2025-10-13 16:01:12.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:13 np0005485008 systemd-logind[784]: New session 36 of user nova.
Oct 13 12:01:13 np0005485008 systemd[1]: Created slice User Slice of UID 42436.
Oct 13 12:01:13 np0005485008 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 13 12:01:13 np0005485008 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 13 12:01:13 np0005485008 systemd[1]: Starting User Manager for UID 42436...
Oct 13 12:01:13 np0005485008 systemd[222157]: Queued start job for default target Main User Target.
Oct 13 12:01:13 np0005485008 systemd[222157]: Created slice User Application Slice.
Oct 13 12:01:13 np0005485008 systemd[222157]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 12:01:13 np0005485008 systemd[222157]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 12:01:13 np0005485008 systemd[222157]: Reached target Paths.
Oct 13 12:01:13 np0005485008 systemd[222157]: Reached target Timers.
Oct 13 12:01:13 np0005485008 systemd[222157]: Starting D-Bus User Message Bus Socket...
Oct 13 12:01:13 np0005485008 systemd[222157]: Starting Create User's Volatile Files and Directories...
Oct 13 12:01:13 np0005485008 systemd[222157]: Finished Create User's Volatile Files and Directories.
Oct 13 12:01:13 np0005485008 systemd[222157]: Listening on D-Bus User Message Bus Socket.
Oct 13 12:01:13 np0005485008 systemd[222157]: Reached target Sockets.
Oct 13 12:01:13 np0005485008 systemd[222157]: Reached target Basic System.
Oct 13 12:01:13 np0005485008 systemd[222157]: Reached target Main User Target.
Oct 13 12:01:13 np0005485008 systemd[222157]: Startup finished in 148ms.
Oct 13 12:01:13 np0005485008 systemd[1]: Started User Manager for UID 42436.
Oct 13 12:01:13 np0005485008 systemd[1]: Started Session 36 of User nova.
Oct 13 12:01:13 np0005485008 systemd[1]: session-36.scope: Deactivated successfully.
Oct 13 12:01:13 np0005485008 systemd-logind[784]: Session 36 logged out. Waiting for processes to exit.
Oct 13 12:01:13 np0005485008 systemd-logind[784]: Removed session 36.
Oct 13 12:01:16 np0005485008 nova_compute[192512]: 2025-10-13 16:01:16.158 2 DEBUG nova.compute.manager [req-b87a053c-72bf-43f5-a038-061a590070cb req-d63d64ba-03d9-4b78-bddd-48a5bdc984ee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:16 np0005485008 nova_compute[192512]: 2025-10-13 16:01:16.160 2 DEBUG oslo_concurrency.lockutils [req-b87a053c-72bf-43f5-a038-061a590070cb req-d63d64ba-03d9-4b78-bddd-48a5bdc984ee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:16 np0005485008 nova_compute[192512]: 2025-10-13 16:01:16.161 2 DEBUG oslo_concurrency.lockutils [req-b87a053c-72bf-43f5-a038-061a590070cb req-d63d64ba-03d9-4b78-bddd-48a5bdc984ee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:16 np0005485008 nova_compute[192512]: 2025-10-13 16:01:16.161 2 DEBUG oslo_concurrency.lockutils [req-b87a053c-72bf-43f5-a038-061a590070cb req-d63d64ba-03d9-4b78-bddd-48a5bdc984ee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:16 np0005485008 nova_compute[192512]: 2025-10-13 16:01:16.162 2 DEBUG nova.compute.manager [req-b87a053c-72bf-43f5-a038-061a590070cb req-d63d64ba-03d9-4b78-bddd-48a5bdc984ee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:16 np0005485008 nova_compute[192512]: 2025-10-13 16:01:16.162 2 DEBUG nova.compute.manager [req-b87a053c-72bf-43f5-a038-061a590070cb req-d63d64ba-03d9-4b78-bddd-48a5bdc984ee 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:01:16 np0005485008 nova_compute[192512]: 2025-10-13 16:01:16.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:17 np0005485008 nova_compute[192512]: 2025-10-13 16:01:17.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:17 np0005485008 nova_compute[192512]: 2025-10-13 16:01:17.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:17 np0005485008 nova_compute[192512]: 2025-10-13 16:01:17.893 2 INFO nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Took 7.96 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Oct 13 12:01:17 np0005485008 nova_compute[192512]: 2025-10-13 16:01:17.893 2 DEBUG nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 12:01:17 np0005485008 nova_compute[192512]: 2025-10-13 16:01:17.974 2 DEBUG nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp8czd5q66',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(8bec8019-244e-44e9-ac16-dab1e042d638),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.185 2 DEBUG nova.objects.instance [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.187 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.191 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.191 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.295 2 DEBUG nova.virt.libvirt.vif [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:00:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-311358289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-311358289',id=18,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:00:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-8c1u80pb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:00:24Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.296 2 DEBUG nova.network.os_vif_util [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.296 2 DEBUG nova.network.os_vif_util [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.297 2 DEBUG nova.virt.libvirt.migration [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updating guest XML with vif config: <interface type="ethernet">
Oct 13 12:01:18 np0005485008 nova_compute[192512]:  <mac address="fa:16:3e:33:e8:6d"/>
Oct 13 12:01:18 np0005485008 nova_compute[192512]:  <model type="virtio"/>
Oct 13 12:01:18 np0005485008 nova_compute[192512]:  <driver name="vhost" rx_queue_size="512"/>
Oct 13 12:01:18 np0005485008 nova_compute[192512]:  <mtu size="1442"/>
Oct 13 12:01:18 np0005485008 nova_compute[192512]:  <target dev="tap50291ee1-d0"/>
Oct 13 12:01:18 np0005485008 nova_compute[192512]: </interface>
Oct 13 12:01:18 np0005485008 nova_compute[192512]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.297 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.325 2 DEBUG nova.compute.manager [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.326 2 DEBUG oslo_concurrency.lockutils [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.326 2 DEBUG oslo_concurrency.lockutils [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.326 2 DEBUG oslo_concurrency.lockutils [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.326 2 DEBUG nova.compute.manager [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.326 2 WARNING nova.compute.manager [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received unexpected event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.327 2 DEBUG nova.compute.manager [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-changed-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.327 2 DEBUG nova.compute.manager [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Refreshing instance network info cache due to event network-changed-50291ee1-d049-4960-a775-5ebfbc422214. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.327 2 DEBUG oslo_concurrency.lockutils [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.327 2 DEBUG oslo_concurrency.lockutils [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.327 2 DEBUG nova.network.neutron [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Refreshing network info cache for port 50291ee1-d049-4960-a775-5ebfbc422214 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.695 2 DEBUG nova.virt.libvirt.migration [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Current None elapsed 0 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct 13 12:01:18 np0005485008 nova_compute[192512]: 2025-10-13 16:01:18.696 2 INFO nova.virt.libvirt.migration [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct 13 12:01:19 np0005485008 nova_compute[192512]: 2025-10-13 16:01:19.015 2 INFO nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct 13 12:01:19 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 12:01:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:01:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:01:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:01:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:01:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:01:19 np0005485008 nova_compute[192512]: 2025-10-13 16:01:19.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:19 np0005485008 nova_compute[192512]: 2025-10-13 16:01:19.530 2 DEBUG nova.virt.libvirt.migration [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Current 50 elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct 13 12:01:19 np0005485008 nova_compute[192512]: 2025-10-13 16:01:19.530 2 DEBUG nova.virt.libvirt.migration [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct 13 12:01:19 np0005485008 nova_compute[192512]: 2025-10-13 16:01:19.958 2 DEBUG nova.network.neutron [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updated VIF entry in instance network info cache for port 50291ee1-d049-4960-a775-5ebfbc422214. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 12:01:19 np0005485008 nova_compute[192512]: 2025-10-13 16:01:19.959 2 DEBUG nova.network.neutron [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updating instance_info_cache with network_info: [{"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.041 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371280.0410914, 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.042 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] VM Paused (Lifecycle Event)#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.044 2 DEBUG nova.virt.libvirt.migration [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Current 50 elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.044 2 DEBUG nova.virt.libvirt.migration [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct 13 12:01:20 np0005485008 kernel: tap50291ee1-d0 (unregistering): left promiscuous mode
Oct 13 12:01:20 np0005485008 NetworkManager[51587]: <info>  [1760371280.1743] device (tap50291ee1-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:01:20 np0005485008 ovn_controller[94758]: 2025-10-13T16:01:20Z|00201|binding|INFO|Releasing lport 50291ee1-d049-4960-a775-5ebfbc422214 from this chassis (sb_readonly=0)
Oct 13 12:01:20 np0005485008 ovn_controller[94758]: 2025-10-13T16:01:20Z|00202|binding|INFO|Setting lport 50291ee1-d049-4960-a775-5ebfbc422214 down in Southbound
Oct 13 12:01:20 np0005485008 ovn_controller[94758]: 2025-10-13T16:01:20Z|00203|binding|INFO|Removing iface tap50291ee1-d0 ovn-installed in OVS
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.196 2 DEBUG oslo_concurrency.lockutils [req-cc73930b-e23d-4d15-b765-1f660c76a3c5 req-cb7728e2-83bd-4813-be28-1f02f1f7e319 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 ovn_controller[94758]: 2025-10-13T16:01:20Z|00204|binding|INFO|Releasing lport 5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182 from this chassis (sb_readonly=0)
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.217 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:e8:6d 10.100.0.14'], port_security=['fa:16:3e:33:e8:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6236ce6f-4317-42ce-8c52-bcd579c0494a'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50291ee1-d049-4960-a775-5ebfbc422214) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.218 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50291ee1-d049-4960-a775-5ebfbc422214 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.219 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.220 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[706982cd-8032-4223-bc6f-53b9a72ea42f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.221 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace which is not needed anymore#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.221 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 13 12:01:20 np0005485008 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000012.scope: Consumed 14.939s CPU time.
Oct 13 12:01:20 np0005485008 systemd-machined[152551]: Machine qemu-16-instance-00000012 terminated.
Oct 13 12:01:20 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221857]: [NOTICE]   (221861) : haproxy version is 2.8.14-c23fe91
Oct 13 12:01:20 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221857]: [NOTICE]   (221861) : path to executable is /usr/sbin/haproxy
Oct 13 12:01:20 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221857]: [WARNING]  (221861) : Exiting Master process...
Oct 13 12:01:20 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221857]: [ALERT]    (221861) : Current worker (221863) exited with code 143 (Terminated)
Oct 13 12:01:20 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[221857]: [WARNING]  (221861) : All workers exited. Exiting... (0)
Oct 13 12:01:20 np0005485008 systemd[1]: libpod-578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f.scope: Deactivated successfully.
Oct 13 12:01:20 np0005485008 podman[222215]: 2025-10-13 16:01:20.356150412 +0000 UTC m=+0.041731992 container died 578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 13 12:01:20 np0005485008 kernel: tap50291ee1-d0: entered promiscuous mode
Oct 13 12:01:20 np0005485008 NetworkManager[51587]: <info>  [1760371280.3722] manager: (tap50291ee1-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Oct 13 12:01:20 np0005485008 systemd-udevd[222195]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:01:20 np0005485008 ovn_controller[94758]: 2025-10-13T16:01:20Z|00205|binding|INFO|Claiming lport 50291ee1-d049-4960-a775-5ebfbc422214 for this chassis.
Oct 13 12:01:20 np0005485008 ovn_controller[94758]: 2025-10-13T16:01:20Z|00206|binding|INFO|50291ee1-d049-4960-a775-5ebfbc422214: Claiming fa:16:3e:33:e8:6d 10.100.0.14
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 kernel: tap50291ee1-d0 (unregistering): left promiscuous mode
Oct 13 12:01:20 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f-userdata-shm.mount: Deactivated successfully.
Oct 13 12:01:20 np0005485008 systemd[1]: var-lib-containers-storage-overlay-81c928b5b67828a563c2b048e8c46fbc765eed72c10dbceb5b65e2ea65080b49-merged.mount: Deactivated successfully.
Oct 13 12:01:20 np0005485008 podman[222215]: 2025-10-13 16:01:20.414687851 +0000 UTC m=+0.100269431 container cleanup 578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.418 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:e8:6d 10.100.0.14'], port_security=['fa:16:3e:33:e8:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6236ce6f-4317-42ce-8c52-bcd579c0494a'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50291ee1-d049-4960-a775-5ebfbc422214) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:01:20 np0005485008 systemd[1]: libpod-conmon-578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f.scope: Deactivated successfully.
Oct 13 12:01:20 np0005485008 ovn_controller[94758]: 2025-10-13T16:01:20Z|00207|binding|INFO|Releasing lport 50291ee1-d049-4960-a775-5ebfbc422214 from this chassis (sb_readonly=0)
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.443 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.443 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.443 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct 13 12:01:20 np0005485008 podman[222249]: 2025-10-13 16:01:20.480489228 +0000 UTC m=+0.041398472 container remove 578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.486 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7d72b0cd-9167-44ac-a9fa-a123f0fbe6e4]: (4, ('Mon Oct 13 04:01:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f)\n578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f\nMon Oct 13 04:01:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f)\n578e26223d437cf1c6527281e3ab75160bcc25bd850d47aafd765f53ac731c4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.488 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[97061d32-9f19-49de-98dc-66fd099c7194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.489 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 kernel: tap39a43da9-c0: left promiscuous mode
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.507 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[58461ed6-219d-4638-9e03-2decf33e1441]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.521 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:e8:6d 10.100.0.14'], port_security=['fa:16:3e:33:e8:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6236ce6f-4317-42ce-8c52-bcd579c0494a'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=50291ee1-d049-4960-a775-5ebfbc422214) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.536 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ae22a040-9995-4e1a-8097-8c92fb4db660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.538 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0254b4ee-ab4a-4ec1-aa5e-901074f16ad1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.547 2 DEBUG nova.virt.libvirt.guest [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f' (instance-00000012) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.549 2 INFO nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Migration operation has completed#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.549 2 INFO nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] _post_live_migration() is started..#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.554 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[86754a1a-1da5-49e5-bd69-0644c755d6f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476090, 'reachable_time': 41805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 222272, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.557 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:01:20 np0005485008 systemd[1]: run-netns-ovnmeta\x2d39a43da9\x2dcf4c\x2d4fe3\x2dab73\x2dbf8705320dae.mount: Deactivated successfully.
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.557 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9f763c-dd17-41b6-97aa-634093bd61fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.558 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50291ee1-d049-4960-a775-5ebfbc422214 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.560 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.560 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[338b3243-3281-475a-8127-42c8b5037088]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.561 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 50291ee1-d049-4960-a775-5ebfbc422214 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.562 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:01:20 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:20.562 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7e92459f-a6f6-47e6-9c07-f945866f8f95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.641 2 DEBUG nova.compute.manager [req-a237ec78-4800-4121-80b4-216ba016171a req-4f99b120-23be-4d6d-ad20-dbd7d5aa66dc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.642 2 DEBUG oslo_concurrency.lockutils [req-a237ec78-4800-4121-80b4-216ba016171a req-4f99b120-23be-4d6d-ad20-dbd7d5aa66dc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.642 2 DEBUG oslo_concurrency.lockutils [req-a237ec78-4800-4121-80b4-216ba016171a req-4f99b120-23be-4d6d-ad20-dbd7d5aa66dc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.642 2 DEBUG oslo_concurrency.lockutils [req-a237ec78-4800-4121-80b4-216ba016171a req-4f99b120-23be-4d6d-ad20-dbd7d5aa66dc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.643 2 DEBUG nova.compute.manager [req-a237ec78-4800-4121-80b4-216ba016171a req-4f99b120-23be-4d6d-ad20-dbd7d5aa66dc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:20 np0005485008 nova_compute[192512]: 2025-10-13 16:01:20.643 2 DEBUG nova.compute.manager [req-a237ec78-4800-4121-80b4-216ba016171a req-4f99b120-23be-4d6d-ad20-dbd7d5aa66dc 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:01:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:21.423 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:21.426 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:01:21 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:21.426 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.487 2 DEBUG nova.compute.manager [req-cabae4c4-bbee-4edf-bdb0-edd851438907 req-a9e1865d-42f3-4f3e-8f92-4a4658c3a464 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.488 2 DEBUG oslo_concurrency.lockutils [req-cabae4c4-bbee-4edf-bdb0-edd851438907 req-a9e1865d-42f3-4f3e-8f92-4a4658c3a464 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.489 2 DEBUG oslo_concurrency.lockutils [req-cabae4c4-bbee-4edf-bdb0-edd851438907 req-a9e1865d-42f3-4f3e-8f92-4a4658c3a464 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.489 2 DEBUG oslo_concurrency.lockutils [req-cabae4c4-bbee-4edf-bdb0-edd851438907 req-a9e1865d-42f3-4f3e-8f92-4a4658c3a464 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.489 2 DEBUG nova.compute.manager [req-cabae4c4-bbee-4edf-bdb0-edd851438907 req-a9e1865d-42f3-4f3e-8f92-4a4658c3a464 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.490 2 DEBUG nova.compute.manager [req-cabae4c4-bbee-4edf-bdb0-edd851438907 req-a9e1865d-42f3-4f3e-8f92-4a4658c3a464 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-unplugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:01:21 np0005485008 nova_compute[192512]: 2025-10-13 16:01:21.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.190 2 DEBUG nova.network.neutron [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Activated binding for port 50291ee1-d049-4960-a775-5ebfbc422214 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.191 2 DEBUG nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.192 2 DEBUG nova.virt.libvirt.vif [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:00:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-311358289',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-311358289',id=18,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:00:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-8c1u80pb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:01:05Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.192 2 DEBUG nova.network.os_vif_util [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.193 2 DEBUG nova.network.os_vif_util [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.193 2 DEBUG os_vif [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.196 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50291ee1-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.201 2 INFO os_vif [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e8:6d,bridge_name='br-int',has_traffic_filtering=True,id=50291ee1-d049-4960-a775-5ebfbc422214,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50291ee1-d0')#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.202 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.202 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.203 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.203 2 DEBUG nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.203 2 INFO nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Deleting instance files /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f_del#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.204 2 INFO nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Deletion of /var/lib/nova/instances/3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f_del complete#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.472 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.472 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.473 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.473 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.642 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.644 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5853MB free_disk=73.46557235717773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.644 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.644 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.714 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updating resource usage from migration 8bec8019-244e-44e9-ac16-dab1e042d638#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.745 2 DEBUG nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.746 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.746 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.747 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.747 2 DEBUG nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.747 2 WARNING nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received unexpected event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.748 2 DEBUG nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.748 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.748 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.748 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.749 2 DEBUG nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.749 2 WARNING nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received unexpected event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.749 2 DEBUG nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.749 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.749 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.750 2 DEBUG oslo_concurrency.lockutils [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.750 2 DEBUG nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.750 2 WARNING nova.compute.manager [req-83c2148d-7b0d-4337-81ff-bef099a6fe3b req-9331c974-6b62-49d1-a612-80d97cc79461 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received unexpected event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.752 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration 8bec8019-244e-44e9-ac16-dab1e042d638 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.752 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.752 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.828 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.850 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.872 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:01:22 np0005485008 nova_compute[192512]: 2025-10-13 16:01:22.873 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:23 np0005485008 nova_compute[192512]: 2025-10-13 16:01:23.873 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:23 np0005485008 nova_compute[192512]: 2025-10-13 16:01:23.873 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:01:23 np0005485008 nova_compute[192512]: 2025-10-13 16:01:23.874 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:01:23 np0005485008 systemd[1]: Stopping User Manager for UID 42436...
Oct 13 12:01:23 np0005485008 systemd[222157]: Activating special unit Exit the Session...
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped target Main User Target.
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped target Basic System.
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped target Paths.
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped target Sockets.
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped target Timers.
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 12:01:23 np0005485008 systemd[222157]: Closed D-Bus User Message Bus Socket.
Oct 13 12:01:23 np0005485008 systemd[222157]: Stopped Create User's Volatile Files and Directories.
Oct 13 12:01:23 np0005485008 systemd[222157]: Removed slice User Application Slice.
Oct 13 12:01:23 np0005485008 systemd[222157]: Reached target Shutdown.
Oct 13 12:01:23 np0005485008 systemd[222157]: Finished Exit the Session.
Oct 13 12:01:23 np0005485008 systemd[222157]: Reached target Exit the Session.
Oct 13 12:01:23 np0005485008 systemd[1]: user@42436.service: Deactivated successfully.
Oct 13 12:01:23 np0005485008 systemd[1]: Stopped User Manager for UID 42436.
Oct 13 12:01:23 np0005485008 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 13 12:01:23 np0005485008 nova_compute[192512]: 2025-10-13 16:01:23.958 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:01:23 np0005485008 nova_compute[192512]: 2025-10-13 16:01:23.958 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:01:23 np0005485008 nova_compute[192512]: 2025-10-13 16:01:23.959 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 12:01:23 np0005485008 nova_compute[192512]: 2025-10-13 16:01:23.959 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:01:23 np0005485008 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 13 12:01:23 np0005485008 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 13 12:01:23 np0005485008 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 13 12:01:23 np0005485008 systemd[1]: Removed slice User Slice of UID 42436.
Oct 13 12:01:24 np0005485008 nova_compute[192512]: 2025-10-13 16:01:24.852 2 DEBUG nova.compute.manager [req-e363fca5-b742-4c04-aa68-2a96ee1eae19 req-d7d33c68-2321-4617-9332-01eec584310d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:01:24 np0005485008 nova_compute[192512]: 2025-10-13 16:01:24.853 2 DEBUG oslo_concurrency.lockutils [req-e363fca5-b742-4c04-aa68-2a96ee1eae19 req-d7d33c68-2321-4617-9332-01eec584310d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:24 np0005485008 nova_compute[192512]: 2025-10-13 16:01:24.853 2 DEBUG oslo_concurrency.lockutils [req-e363fca5-b742-4c04-aa68-2a96ee1eae19 req-d7d33c68-2321-4617-9332-01eec584310d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:24 np0005485008 nova_compute[192512]: 2025-10-13 16:01:24.854 2 DEBUG oslo_concurrency.lockutils [req-e363fca5-b742-4c04-aa68-2a96ee1eae19 req-d7d33c68-2321-4617-9332-01eec584310d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:24 np0005485008 nova_compute[192512]: 2025-10-13 16:01:24.854 2 DEBUG nova.compute.manager [req-e363fca5-b742-4c04-aa68-2a96ee1eae19 req-d7d33c68-2321-4617-9332-01eec584310d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] No waiting events found dispatching network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:01:24 np0005485008 nova_compute[192512]: 2025-10-13 16:01:24.854 2 WARNING nova.compute.manager [req-e363fca5-b742-4c04-aa68-2a96ee1eae19 req-d7d33c68-2321-4617-9332-01eec584310d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Received unexpected event network-vif-plugged-50291ee1-d049-4960-a775-5ebfbc422214 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:01:25 np0005485008 nova_compute[192512]: 2025-10-13 16:01:25.090 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updating instance_info_cache with network_info: [{"id": "50291ee1-d049-4960-a775-5ebfbc422214", "address": "fa:16:3e:33:e8:6d", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50291ee1-d0", "ovs_interfaceid": "50291ee1-d049-4960-a775-5ebfbc422214", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:01:25 np0005485008 nova_compute[192512]: 2025-10-13 16:01:25.105 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:01:25 np0005485008 nova_compute[192512]: 2025-10-13 16:01:25.105 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 12:01:25 np0005485008 nova_compute[192512]: 2025-10-13 16:01:25.106 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:26 np0005485008 nova_compute[192512]: 2025-10-13 16:01:26.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.139 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.140 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.140 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.171 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.171 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.172 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.172 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.324 2 WARNING nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.325 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5859MB free_disk=73.46561431884766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.326 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.326 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.385 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Migration for instance 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.409 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.437 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Migration 8bec8019-244e-44e9-ac16-dab1e042d638 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.437 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.438 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.478 2 DEBUG nova.compute.provider_tree [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.494 2 DEBUG nova.scheduler.client.report [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.529 2 DEBUG nova.compute.resource_tracker [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.530 2 DEBUG oslo_concurrency.lockutils [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.536 2 INFO nova.compute.manager [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.640 2 INFO nova.scheduler.client.report [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Deleted allocation for migration 8bec8019-244e-44e9-ac16-dab1e042d638#033[00m
Oct 13 12:01:27 np0005485008 nova_compute[192512]: 2025-10-13 16:01:27.641 2 DEBUG nova.virt.libvirt.driver [None req-d1a22546-38f9-4ffb-824b-9cc6e1930772 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct 13 12:01:27 np0005485008 podman[222277]: 2025-10-13 16:01:27.765904289 +0000 UTC m=+0.066540771 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:01:27 np0005485008 podman[222278]: 2025-10-13 16:01:27.767408497 +0000 UTC m=+0.061231854 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:01:27 np0005485008 podman[222279]: 2025-10-13 16:01:27.768287504 +0000 UTC m=+0.057410724 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:01:27 np0005485008 podman[222276]: 2025-10-13 16:01:27.796380986 +0000 UTC m=+0.097396830 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 12:01:27 np0005485008 podman[222290]: 2025-10-13 16:01:27.798264356 +0000 UTC m=+0.085088034 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 12:01:30 np0005485008 nova_compute[192512]: 2025-10-13 16:01:30.656 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:01:31 np0005485008 nova_compute[192512]: 2025-10-13 16:01:31.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:32 np0005485008 nova_compute[192512]: 2025-10-13 16:01:32.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:33.966 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:01:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:33.966 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:01:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:01:33.966 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:01:35 np0005485008 nova_compute[192512]: 2025-10-13 16:01:35.443 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371280.4413488, 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:01:35 np0005485008 nova_compute[192512]: 2025-10-13 16:01:35.443 2 INFO nova.compute.manager [-] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:01:35 np0005485008 nova_compute[192512]: 2025-10-13 16:01:35.523 2 DEBUG nova.compute.manager [None req-7f27d1fd-59e8-40a0-8790-c0d1c28f4c4f - - - - - -] [instance: 3ab0edce-63b8-4ee2-ae0a-99ae4f4c5b9f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:01:35 np0005485008 podman[202884]: time="2025-10-13T16:01:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:01:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:01:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:01:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:01:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3005 "" "Go-http-client/1.1"
Oct 13 12:01:36 np0005485008 nova_compute[192512]: 2025-10-13 16:01:36.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:37 np0005485008 nova_compute[192512]: 2025-10-13 16:01:37.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:41 np0005485008 nova_compute[192512]: 2025-10-13 16:01:41.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:41 np0005485008 podman[222376]: 2025-10-13 16:01:41.759613495 +0000 UTC m=+0.064836238 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, version=9.6)
Oct 13 12:01:42 np0005485008 nova_compute[192512]: 2025-10-13 16:01:42.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:46 np0005485008 nova_compute[192512]: 2025-10-13 16:01:46.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:47 np0005485008 nova_compute[192512]: 2025-10-13 16:01:47.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:01:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:01:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:01:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:01:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:01:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:01:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:01:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:01:51 np0005485008 nova_compute[192512]: 2025-10-13 16:01:51.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:52 np0005485008 nova_compute[192512]: 2025-10-13 16:01:52.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:56 np0005485008 nova_compute[192512]: 2025-10-13 16:01:56.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:57 np0005485008 nova_compute[192512]: 2025-10-13 16:01:57.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:01:58 np0005485008 podman[222398]: 2025-10-13 16:01:58.767762304 +0000 UTC m=+0.067846842 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 13 12:01:58 np0005485008 podman[222399]: 2025-10-13 16:01:58.774939419 +0000 UTC m=+0.071009841 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 13 12:01:58 np0005485008 podman[222401]: 2025-10-13 16:01:58.780503014 +0000 UTC m=+0.066352995 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:01:58 np0005485008 podman[222400]: 2025-10-13 16:01:58.798653694 +0000 UTC m=+0.090021098 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 12:01:58 np0005485008 podman[222408]: 2025-10-13 16:01:58.806384937 +0000 UTC m=+0.091412712 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 13 12:02:01 np0005485008 nova_compute[192512]: 2025-10-13 16:02:01.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:02 np0005485008 nova_compute[192512]: 2025-10-13 16:02:02.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:05 np0005485008 podman[202884]: time="2025-10-13T16:02:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:02:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:02:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:02:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:02:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3007 "" "Go-http-client/1.1"
Oct 13 12:02:06 np0005485008 nova_compute[192512]: 2025-10-13 16:02:06.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:07 np0005485008 nova_compute[192512]: 2025-10-13 16:02:07.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:11 np0005485008 nova_compute[192512]: 2025-10-13 16:02:11.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:12 np0005485008 nova_compute[192512]: 2025-10-13 16:02:12.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:12 np0005485008 podman[222502]: 2025-10-13 16:02:12.760016174 +0000 UTC m=+0.066426037 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 12:02:13 np0005485008 nova_compute[192512]: 2025-10-13 16:02:13.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:13 np0005485008 nova_compute[192512]: 2025-10-13 16:02:13.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:02:16 np0005485008 nova_compute[192512]: 2025-10-13 16:02:16.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:17 np0005485008 nova_compute[192512]: 2025-10-13 16:02:17.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:17 np0005485008 nova_compute[192512]: 2025-10-13 16:02:17.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:02:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:02:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:02:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:02:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:02:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:02:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:02:19 np0005485008 nova_compute[192512]: 2025-10-13 16:02:19.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:20 np0005485008 nova_compute[192512]: 2025-10-13 16:02:20.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:20 np0005485008 nova_compute[192512]: 2025-10-13 16:02:20.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:21 np0005485008 nova_compute[192512]: 2025-10-13 16:02:21.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:21 np0005485008 nova_compute[192512]: 2025-10-13 16:02:21.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:22 np0005485008 nova_compute[192512]: 2025-10-13 16:02:22.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.455 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.456 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.506 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.507 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.507 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.507 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.679 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.680 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5874MB free_disk=73.46572875976562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.680 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.680 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.774 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.775 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.804 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.837 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.840 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:02:23 np0005485008 nova_compute[192512]: 2025-10-13 16:02:23.840 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:24 np0005485008 nova_compute[192512]: 2025-10-13 16:02:24.812 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:02:26 np0005485008 nova_compute[192512]: 2025-10-13 16:02:26.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:27 np0005485008 nova_compute[192512]: 2025-10-13 16:02:27.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:28 np0005485008 nova_compute[192512]: 2025-10-13 16:02:28.983 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:28 np0005485008 nova_compute[192512]: 2025-10-13 16:02:28.983 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.157 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.342 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.343 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.354 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.354 2 INFO nova.compute.claims [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.634 2 DEBUG nova.compute.provider_tree [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.679 2 DEBUG nova.scheduler.client.report [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.707 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.708 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 12:02:29 np0005485008 podman[222525]: 2025-10-13 16:02:29.762732726 +0000 UTC m=+0.059574363 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.773 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.773 2 DEBUG nova.network.neutron [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 12:02:29 np0005485008 podman[222524]: 2025-10-13 16:02:29.780455232 +0000 UTC m=+0.078161745 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=multipathd)
Oct 13 12:02:29 np0005485008 podman[222527]: 2025-10-13 16:02:29.78068673 +0000 UTC m=+0.067940975 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:02:29 np0005485008 podman[222526]: 2025-10-13 16:02:29.782781175 +0000 UTC m=+0.072740925 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.803 2 INFO nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.825 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 12:02:29 np0005485008 podman[222528]: 2025-10-13 16:02:29.831648471 +0000 UTC m=+0.120344741 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.927 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.928 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.929 2 INFO nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Creating image(s)#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.929 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "/var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.930 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.930 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "/var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:29 np0005485008 nova_compute[192512]: 2025-10-13 16:02:29.957 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.012 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.013 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.014 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.027 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.094 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.095 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.125 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk 1073741824" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.126 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.127 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.203 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.207 2 DEBUG nova.virt.disk.api [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Checking if we can resize image /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.208 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.270 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.271 2 DEBUG nova.virt.disk.api [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Cannot resize image /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.271 2 DEBUG nova.objects.instance [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'migration_context' on Instance uuid 932125ca-093f-4fd1-b20d-0da836590da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.289 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.290 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Ensure instance console log exists: /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.291 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.291 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:30 np0005485008 nova_compute[192512]: 2025-10-13 16:02:30.292 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:31 np0005485008 nova_compute[192512]: 2025-10-13 16:02:31.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:31.072 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:02:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:31.073 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:02:31 np0005485008 nova_compute[192512]: 2025-10-13 16:02:31.252 2 DEBUG nova.network.neutron [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Successfully created port: d3b861e4-8a7a-4e22-a5b3-75c71bbef549 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 12:02:31 np0005485008 nova_compute[192512]: 2025-10-13 16:02:31.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:32 np0005485008 nova_compute[192512]: 2025-10-13 16:02:32.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:33 np0005485008 nova_compute[192512]: 2025-10-13 16:02:33.295 2 DEBUG nova.network.neutron [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Successfully updated port: d3b861e4-8a7a-4e22-a5b3-75c71bbef549 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 12:02:33 np0005485008 nova_compute[192512]: 2025-10-13 16:02:33.333 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:02:33 np0005485008 nova_compute[192512]: 2025-10-13 16:02:33.334 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquired lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:02:33 np0005485008 nova_compute[192512]: 2025-10-13 16:02:33.335 2 DEBUG nova.network.neutron [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:02:33 np0005485008 nova_compute[192512]: 2025-10-13 16:02:33.447 2 DEBUG nova.compute.manager [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received event network-changed-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:02:33 np0005485008 nova_compute[192512]: 2025-10-13 16:02:33.448 2 DEBUG nova.compute.manager [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Refreshing instance network info cache due to event network-changed-d3b861e4-8a7a-4e22-a5b3-75c71bbef549. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 12:02:33 np0005485008 nova_compute[192512]: 2025-10-13 16:02:33.449 2 DEBUG oslo_concurrency.lockutils [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:02:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:33.966 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:33.967 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:33.967 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:34 np0005485008 nova_compute[192512]: 2025-10-13 16:02:34.015 2 DEBUG nova.network.neutron [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.409 2 DEBUG nova.network.neutron [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Updating instance_info_cache with network_info: [{"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.435 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Releasing lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.435 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Instance network_info: |[{"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.436 2 DEBUG oslo_concurrency.lockutils [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.437 2 DEBUG nova.network.neutron [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Refreshing network info cache for port d3b861e4-8a7a-4e22-a5b3-75c71bbef549 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.441 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Start _get_guest_xml network_info=[{"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.449 2 WARNING nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.455 2 DEBUG nova.virt.libvirt.host [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.456 2 DEBUG nova.virt.libvirt.host [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.463 2 DEBUG nova.virt.libvirt.host [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.464 2 DEBUG nova.virt.libvirt.host [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.464 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.465 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.465 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.465 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.466 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.466 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.466 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.466 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.466 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.467 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.467 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.467 2 DEBUG nova.virt.hardware [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.470 2 DEBUG nova.virt.libvirt.vif [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:02:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-906170487',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-906170487',id=20,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-mzqzl357',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:02:29Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=932125ca-093f-4fd1-b20d-0da836590da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.471 2 DEBUG nova.network.os_vif_util [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.471 2 DEBUG nova.network.os_vif_util [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:6c:0c,bridge_name='br-int',has_traffic_filtering=True,id=d3b861e4-8a7a-4e22-a5b3-75c71bbef549,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b861e4-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.472 2 DEBUG nova.objects.instance [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'pci_devices' on Instance uuid 932125ca-093f-4fd1-b20d-0da836590da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.486 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] End _get_guest_xml xml=<domain type="kvm">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <uuid>932125ca-093f-4fd1-b20d-0da836590da3</uuid>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <name>instance-00000014</name>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteStrategies-server-906170487</nova:name>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 16:02:35</nova:creationTime>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:user uuid="3f85e781b03b405795a2079908bd2792">tempest-TestExecuteStrategies-1416319229-project-admin</nova:user>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:project uuid="4d9418fd42c841d38cbfc7819a3fca65">tempest-TestExecuteStrategies-1416319229</nova:project>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        <nova:port uuid="d3b861e4-8a7a-4e22-a5b3-75c71bbef549">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <system>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <entry name="serial">932125ca-093f-4fd1-b20d-0da836590da3</entry>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <entry name="uuid">932125ca-093f-4fd1-b20d-0da836590da3</entry>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </system>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <os>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  </os>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <features>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  </features>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  </clock>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  <devices>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk.config"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:09:6c:0c"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <target dev="tapd3b861e4-8a"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </interface>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/console.log" append="off"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </serial>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <video>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </video>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </rng>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 12:02:35 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 12:02:35 np0005485008 nova_compute[192512]:  </devices>
Oct 13 12:02:35 np0005485008 nova_compute[192512]: </domain>
Oct 13 12:02:35 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.487 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Preparing to wait for external event network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.487 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.488 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.488 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.488 2 DEBUG nova.virt.libvirt.vif [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:02:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-906170487',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-906170487',id=20,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-mzqzl357',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=T
agList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:02:29Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=932125ca-093f-4fd1-b20d-0da836590da3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.489 2 DEBUG nova.network.os_vif_util [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.489 2 DEBUG nova.network.os_vif_util [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:6c:0c,bridge_name='br-int',has_traffic_filtering=True,id=d3b861e4-8a7a-4e22-a5b3-75c71bbef549,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b861e4-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.489 2 DEBUG os_vif [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:6c:0c,bridge_name='br-int',has_traffic_filtering=True,id=d3b861e4-8a7a-4e22-a5b3-75c71bbef549,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b861e4-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3b861e4-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3b861e4-8a, col_values=(('external_ids', {'iface-id': 'd3b861e4-8a7a-4e22-a5b3-75c71bbef549', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:6c:0c', 'vm-uuid': '932125ca-093f-4fd1-b20d-0da836590da3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:35 np0005485008 NetworkManager[51587]: <info>  [1760371355.4968] manager: (tapd3b861e4-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.503 2 INFO os_vif [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:6c:0c,bridge_name='br-int',has_traffic_filtering=True,id=d3b861e4-8a7a-4e22-a5b3-75c71bbef549,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b861e4-8a')#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.618 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.618 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.619 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] No VIF found with MAC fa:16:3e:09:6c:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 12:02:35 np0005485008 nova_compute[192512]: 2025-10-13 16:02:35.620 2 INFO nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Using config drive#033[00m
Oct 13 12:02:35 np0005485008 podman[202884]: time="2025-10-13T16:02:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:02:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:02:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:02:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:02:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.075 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.195 2 INFO nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Creating config drive at /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk.config#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.201 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppeham6ro execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.323 2 DEBUG oslo_concurrency.processutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppeham6ro" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:02:36 np0005485008 kernel: tapd3b861e4-8a: entered promiscuous mode
Oct 13 12:02:36 np0005485008 NetworkManager[51587]: <info>  [1760371356.3851] manager: (tapd3b861e4-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Oct 13 12:02:36 np0005485008 ovn_controller[94758]: 2025-10-13T16:02:36Z|00208|binding|INFO|Claiming lport d3b861e4-8a7a-4e22-a5b3-75c71bbef549 for this chassis.
Oct 13 12:02:36 np0005485008 ovn_controller[94758]: 2025-10-13T16:02:36Z|00209|binding|INFO|d3b861e4-8a7a-4e22-a5b3-75c71bbef549: Claiming fa:16:3e:09:6c:0c 10.100.0.9
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.391 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6c:0c 10.100.0.9'], port_security=['fa:16:3e:09:6c:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '932125ca-093f-4fd1-b20d-0da836590da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=d3b861e4-8a7a-4e22-a5b3-75c71bbef549) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.393 103642 INFO neutron.agent.ovn.metadata.agent [-] Port d3b861e4-8a7a-4e22-a5b3-75c71bbef549 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.394 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:02:36 np0005485008 ovn_controller[94758]: 2025-10-13T16:02:36Z|00210|binding|INFO|Setting lport d3b861e4-8a7a-4e22-a5b3-75c71bbef549 ovn-installed in OVS
Oct 13 12:02:36 np0005485008 ovn_controller[94758]: 2025-10-13T16:02:36Z|00211|binding|INFO|Setting lport d3b861e4-8a7a-4e22-a5b3-75c71bbef549 up in Southbound
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.408 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d37b8bdf-e037-4a4a-ada4-6eb9a2396df5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.409 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39a43da9-c1 in ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.412 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39a43da9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.412 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[982a9247-6640-4e3a-ae5b-db5921c670cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.414 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f34f6387-7a61-4e1c-81cb-abaee76109e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 systemd-udevd[222663]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.425 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[2317162d-7924-46ec-af6f-c541d9432178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 NetworkManager[51587]: <info>  [1760371356.4366] device (tapd3b861e4-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:02:36 np0005485008 NetworkManager[51587]: <info>  [1760371356.4378] device (tapd3b861e4-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:02:36 np0005485008 systemd-machined[152551]: New machine qemu-17-instance-00000014.
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.443 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[31fe1cf1-7980-4513-8e9c-080f4105240a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 systemd[1]: Started Virtual Machine qemu-17-instance-00000014.
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.476 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d73a2523-fd9f-4045-b796-39dc5422e48a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 NetworkManager[51587]: <info>  [1760371356.4838] manager: (tap39a43da9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Oct 13 12:02:36 np0005485008 systemd-udevd[222670]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.483 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed0117f-f6fb-4caf-a9f0-e8d38ca29487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.518 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[a765743d-f76f-46cf-b214-4f20fc6455de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.522 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd84b2f-4b99-46eb-9508-2974d79f32b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 NetworkManager[51587]: <info>  [1760371356.5485] device (tap39a43da9-c0): carrier: link connected
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.558 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[07323772-eccc-4103-8439-5f5c5ace079c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.580 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ced7b7-ad3e-42b8-bd99-c986b465e12f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489421, 'reachable_time': 20345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 222698, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.601 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e06bb3f8-c623-4213-937b-634806a37ec6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:43e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489421, 'tstamp': 489421}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 222699, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.622 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2da29d-bfc0-458e-8001-ae75e8b64ada]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489421, 'reachable_time': 20345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 222700, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.653 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b208a05d-e49f-45e6-ae8d-e016b38ab143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.699 2 DEBUG nova.compute.manager [req-d401ad8c-7bfe-4982-873d-ac7c83ceef93 req-14fa20c4-14ce-4f71-8a52-77eab8f1ad12 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received event network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.699 2 DEBUG oslo_concurrency.lockutils [req-d401ad8c-7bfe-4982-873d-ac7c83ceef93 req-14fa20c4-14ce-4f71-8a52-77eab8f1ad12 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.699 2 DEBUG oslo_concurrency.lockutils [req-d401ad8c-7bfe-4982-873d-ac7c83ceef93 req-14fa20c4-14ce-4f71-8a52-77eab8f1ad12 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.700 2 DEBUG oslo_concurrency.lockutils [req-d401ad8c-7bfe-4982-873d-ac7c83ceef93 req-14fa20c4-14ce-4f71-8a52-77eab8f1ad12 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.700 2 DEBUG nova.compute.manager [req-d401ad8c-7bfe-4982-873d-ac7c83ceef93 req-14fa20c4-14ce-4f71-8a52-77eab8f1ad12 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Processing event network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.734 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1570ec92-25b1-4177-a284-32e3650ec3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.735 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.736 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.736 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:02:36 np0005485008 NetworkManager[51587]: <info>  [1760371356.7393] manager: (tap39a43da9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Oct 13 12:02:36 np0005485008 kernel: tap39a43da9-c0: entered promiscuous mode
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.746 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:36 np0005485008 ovn_controller[94758]: 2025-10-13T16:02:36Z|00212|binding|INFO|Releasing lport 5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182 from this chassis (sb_readonly=0)
Oct 13 12:02:36 np0005485008 nova_compute[192512]: 2025-10-13 16:02:36.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.760 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.762 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3baa17-1fb2-4be5-8939-5cff784bf1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.762 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:02:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:02:36.763 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'env', 'PROCESS_TAG=haproxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39a43da9-cf4c-4fe3-ab73-bf8705320dae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:02:37 np0005485008 podman[222732]: 2025-10-13 16:02:37.158410981 +0000 UTC m=+0.065363184 container create bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 12:02:37 np0005485008 systemd[1]: Started libpod-conmon-bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124.scope.
Oct 13 12:02:37 np0005485008 podman[222732]: 2025-10-13 16:02:37.119073815 +0000 UTC m=+0.026026108 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:02:37 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:02:37 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05405805cdc3f07a4969ac036a712cb6ff08cd2e958d3842742b59d3e4acb1eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:02:37 np0005485008 podman[222732]: 2025-10-13 16:02:37.249191363 +0000 UTC m=+0.156143586 container init bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 13 12:02:37 np0005485008 podman[222732]: 2025-10-13 16:02:37.255263624 +0000 UTC m=+0.162215827 container start bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 12:02:37 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [NOTICE]   (222758) : New worker (222760) forked
Oct 13 12:02:37 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [NOTICE]   (222758) : Loading success.
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.298 2 DEBUG nova.network.neutron [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Updated VIF entry in instance network info cache for port d3b861e4-8a7a-4e22-a5b3-75c71bbef549. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.298 2 DEBUG nova.network.neutron [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Updating instance_info_cache with network_info: [{"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.319 2 DEBUG oslo_concurrency.lockutils [req-b2ac2612-1a5b-4fed-88cc-a426150043db req-9be16f4f-b184-4c0f-bf53-a0e4eb614490 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.641 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371357.641167, 932125ca-093f-4fd1-b20d-0da836590da3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.642 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] VM Started (Lifecycle Event)#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.645 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.649 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.653 2 INFO nova.virt.libvirt.driver [-] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Instance spawned successfully.#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.653 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.663 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.666 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.679 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.680 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.680 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.680 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.681 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.681 2 DEBUG nova.virt.libvirt.driver [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.690 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.690 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371357.6413527, 932125ca-093f-4fd1-b20d-0da836590da3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.691 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] VM Paused (Lifecycle Event)#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.716 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.720 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371357.6479952, 932125ca-093f-4fd1-b20d-0da836590da3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.720 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.774 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.778 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.821 2 INFO nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Took 7.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.821 2 DEBUG nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.822 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.888 2 INFO nova.compute.manager [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Took 8.59 seconds to build instance.#033[00m
Oct 13 12:02:37 np0005485008 nova_compute[192512]: 2025-10-13 16:02:37.918 2 DEBUG oslo_concurrency.lockutils [None req-13158649-62ae-4edb-959a-f0daadbe3abe 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:38 np0005485008 nova_compute[192512]: 2025-10-13 16:02:38.853 2 DEBUG nova.compute.manager [req-ebed930f-9167-40ea-9d26-37a71c050486 req-68cb3c91-275c-4141-abc3-82e5714e07fd 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received event network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:02:38 np0005485008 nova_compute[192512]: 2025-10-13 16:02:38.854 2 DEBUG oslo_concurrency.lockutils [req-ebed930f-9167-40ea-9d26-37a71c050486 req-68cb3c91-275c-4141-abc3-82e5714e07fd 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:02:38 np0005485008 nova_compute[192512]: 2025-10-13 16:02:38.854 2 DEBUG oslo_concurrency.lockutils [req-ebed930f-9167-40ea-9d26-37a71c050486 req-68cb3c91-275c-4141-abc3-82e5714e07fd 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:02:38 np0005485008 nova_compute[192512]: 2025-10-13 16:02:38.855 2 DEBUG oslo_concurrency.lockutils [req-ebed930f-9167-40ea-9d26-37a71c050486 req-68cb3c91-275c-4141-abc3-82e5714e07fd 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:02:38 np0005485008 nova_compute[192512]: 2025-10-13 16:02:38.855 2 DEBUG nova.compute.manager [req-ebed930f-9167-40ea-9d26-37a71c050486 req-68cb3c91-275c-4141-abc3-82e5714e07fd 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] No waiting events found dispatching network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:02:38 np0005485008 nova_compute[192512]: 2025-10-13 16:02:38.856 2 WARNING nova.compute.manager [req-ebed930f-9167-40ea-9d26-37a71c050486 req-68cb3c91-275c-4141-abc3-82e5714e07fd 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received unexpected event network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 for instance with vm_state active and task_state None.#033[00m
Oct 13 12:02:40 np0005485008 nova_compute[192512]: 2025-10-13 16:02:40.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:41 np0005485008 nova_compute[192512]: 2025-10-13 16:02:41.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:43 np0005485008 podman[222769]: 2025-10-13 16:02:43.763757704 +0000 UTC m=+0.064211038 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 12:02:45 np0005485008 nova_compute[192512]: 2025-10-13 16:02:45.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:46 np0005485008 nova_compute[192512]: 2025-10-13 16:02:46.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:02:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:02:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:02:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:02:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:02:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:02:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:02:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:02:50 np0005485008 nova_compute[192512]: 2025-10-13 16:02:50.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:50 np0005485008 ovn_controller[94758]: 2025-10-13T16:02:50Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:6c:0c 10.100.0.9
Oct 13 12:02:50 np0005485008 ovn_controller[94758]: 2025-10-13T16:02:50Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:6c:0c 10.100.0.9
Oct 13 12:02:51 np0005485008 nova_compute[192512]: 2025-10-13 16:02:51.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:55 np0005485008 nova_compute[192512]: 2025-10-13 16:02:55.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:02:56 np0005485008 nova_compute[192512]: 2025-10-13 16:02:56.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:00 np0005485008 nova_compute[192512]: 2025-10-13 16:03:00.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:00 np0005485008 podman[222809]: 2025-10-13 16:03:00.768766646 +0000 UTC m=+0.063751423 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 12:03:00 np0005485008 podman[222811]: 2025-10-13 16:03:00.769691256 +0000 UTC m=+0.059245412 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 12:03:00 np0005485008 podman[222812]: 2025-10-13 16:03:00.79179286 +0000 UTC m=+0.077133135 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:03:00 np0005485008 podman[222810]: 2025-10-13 16:03:00.799175521 +0000 UTC m=+0.091406501 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:03:00 np0005485008 podman[222818]: 2025-10-13 16:03:00.833908553 +0000 UTC m=+0.113310541 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:03:01 np0005485008 nova_compute[192512]: 2025-10-13 16:03:01.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:05 np0005485008 nova_compute[192512]: 2025-10-13 16:03:05.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:05 np0005485008 podman[202884]: time="2025-10-13T16:03:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:03:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:03:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:03:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:03:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3465 "" "Go-http-client/1.1"
Oct 13 12:03:06 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:06Z|00213|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 13 12:03:06 np0005485008 nova_compute[192512]: 2025-10-13 16:03:06.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:10 np0005485008 nova_compute[192512]: 2025-10-13 16:03:10.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:11 np0005485008 nova_compute[192512]: 2025-10-13 16:03:11.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:14 np0005485008 nova_compute[192512]: 2025-10-13 16:03:14.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:14 np0005485008 nova_compute[192512]: 2025-10-13 16:03:14.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:03:14 np0005485008 podman[222914]: 2025-10-13 16:03:14.778529519 +0000 UTC m=+0.077393732 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 12:03:15 np0005485008 nova_compute[192512]: 2025-10-13 16:03:15.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:16 np0005485008 nova_compute[192512]: 2025-10-13 16:03:16.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:17 np0005485008 nova_compute[192512]: 2025-10-13 16:03:17.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:03:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:03:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:03:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:03:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:03:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:03:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:03:20 np0005485008 nova_compute[192512]: 2025-10-13 16:03:20.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:20 np0005485008 nova_compute[192512]: 2025-10-13 16:03:20.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:20 np0005485008 nova_compute[192512]: 2025-10-13 16:03:20.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:21 np0005485008 nova_compute[192512]: 2025-10-13 16:03:21.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:21 np0005485008 nova_compute[192512]: 2025-10-13 16:03:21.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:22 np0005485008 nova_compute[192512]: 2025-10-13 16:03:22.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.586 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.586 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.586 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.587 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.727 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.797 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.798 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:23 np0005485008 nova_compute[192512]: 2025-10-13 16:03:23.861 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.020 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.021 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5695MB free_disk=73.43628692626953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.022 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.022 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.123 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 932125ca-093f-4fd1-b20d-0da836590da3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.123 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.123 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.163 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.200 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.201 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.219 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.262 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.333 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.353 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.425 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:03:24 np0005485008 nova_compute[192512]: 2025-10-13 16:03:24.425 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.708 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.708 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.709 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 12:03:25 np0005485008 nova_compute[192512]: 2025-10-13 16:03:25.709 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 932125ca-093f-4fd1-b20d-0da836590da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:03:26 np0005485008 nova_compute[192512]: 2025-10-13 16:03:26.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:28 np0005485008 nova_compute[192512]: 2025-10-13 16:03:28.064 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Updating instance_info_cache with network_info: [{"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:03:28 np0005485008 nova_compute[192512]: 2025-10-13 16:03:28.090 2 DEBUG nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Creating tmpfile /var/lib/nova/instances/tmpwuqy4qjr to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:03:28 np0005485008 nova_compute[192512]: 2025-10-13 16:03:28.092 2 DEBUG nova.compute.manager [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwuqy4qjr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:03:28 np0005485008 nova_compute[192512]: 2025-10-13 16:03:28.118 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-932125ca-093f-4fd1-b20d-0da836590da3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:03:28 np0005485008 nova_compute[192512]: 2025-10-13 16:03:28.118 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 12:03:28 np0005485008 nova_compute[192512]: 2025-10-13 16:03:28.119 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:30 np0005485008 nova_compute[192512]: 2025-10-13 16:03:30.022 2 DEBUG nova.compute.manager [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwuqy4qjr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='448b3b84-87cf-4053-951d-095f41bc7996',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:03:30 np0005485008 nova_compute[192512]: 2025-10-13 16:03:30.061 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-448b3b84-87cf-4053-951d-095f41bc7996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:03:30 np0005485008 nova_compute[192512]: 2025-10-13 16:03:30.062 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-448b3b84-87cf-4053-951d-095f41bc7996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:03:30 np0005485008 nova_compute[192512]: 2025-10-13 16:03:30.062 2 DEBUG nova.network.neutron [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:03:30 np0005485008 nova_compute[192512]: 2025-10-13 16:03:30.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:31 np0005485008 nova_compute[192512]: 2025-10-13 16:03:31.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:31 np0005485008 podman[222947]: 2025-10-13 16:03:31.767141705 +0000 UTC m=+0.054504482 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:03:31 np0005485008 podman[222945]: 2025-10-13 16:03:31.770113829 +0000 UTC m=+0.062191865 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, managed_by=edpm_ansible)
Oct 13 12:03:31 np0005485008 podman[222944]: 2025-10-13 16:03:31.775223519 +0000 UTC m=+0.071028861 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 13 12:03:31 np0005485008 podman[222946]: 2025-10-13 16:03:31.804687005 +0000 UTC m=+0.093026673 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:03:31 np0005485008 podman[222948]: 2025-10-13 16:03:31.810547139 +0000 UTC m=+0.093629162 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 13 12:03:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:33.967 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:33.968 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:33.968 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.399 2 DEBUG nova.network.neutron [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Updating instance_info_cache with network_info: [{"id": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "address": "fa:16:3e:a8:9d:be", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6c12ba8-3b", "ovs_interfaceid": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.425 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-448b3b84-87cf-4053-951d-095f41bc7996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.428 2 DEBUG nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwuqy4qjr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='448b3b84-87cf-4053-951d-095f41bc7996',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.429 2 DEBUG nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Creating instance directory: /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.430 2 DEBUG nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Creating disk.info with the contents: {'/var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk': 'qcow2', '/var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.430 2 DEBUG nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.431 2 DEBUG nova.objects.instance [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 448b3b84-87cf-4053-951d-095f41bc7996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.473 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.540 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.542 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.542 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.555 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.619 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.620 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.665 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.666 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.667 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.728 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.729 2 DEBUG nova.virt.disk.api [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.730 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.803 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.804 2 DEBUG nova.virt.disk.api [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.804 2 DEBUG nova.objects.instance [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 448b3b84-87cf-4053-951d-095f41bc7996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.821 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.849 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk.config 485376" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.851 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk.config to /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:03:34 np0005485008 nova_compute[192512]: 2025-10-13 16:03:34.851 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk.config /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.286 2 DEBUG oslo_concurrency.processutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996/disk.config /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.287 2 DEBUG nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.288 2 DEBUG nova.virt.libvirt.vif [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-690751275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-690751275',id=19,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:02:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-sed9za92',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:02:20Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=448b3b84-87cf-4053-951d-095f41bc7996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "address": "fa:16:3e:a8:9d:be", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapf6c12ba8-3b", "ovs_interfaceid": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.288 2 DEBUG nova.network.os_vif_util [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "address": "fa:16:3e:a8:9d:be", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapf6c12ba8-3b", "ovs_interfaceid": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.290 2 DEBUG nova.network.os_vif_util [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9d:be,bridge_name='br-int',has_traffic_filtering=True,id=f6c12ba8-3bb6-47bf-a285-b860dedb2644,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6c12ba8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.291 2 DEBUG os_vif [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9d:be,bridge_name='br-int',has_traffic_filtering=True,id=f6c12ba8-3bb6-47bf-a285-b860dedb2644,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6c12ba8-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.292 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6c12ba8-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf6c12ba8-3b, col_values=(('external_ids', {'iface-id': 'f6c12ba8-3bb6-47bf-a285-b860dedb2644', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:9d:be', 'vm-uuid': '448b3b84-87cf-4053-951d-095f41bc7996'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:35 np0005485008 NetworkManager[51587]: <info>  [1760371415.2983] manager: (tapf6c12ba8-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.308 2 INFO os_vif [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9d:be,bridge_name='br-int',has_traffic_filtering=True,id=f6c12ba8-3bb6-47bf-a285-b860dedb2644,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6c12ba8-3b')#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.308 2 DEBUG nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:03:35 np0005485008 nova_compute[192512]: 2025-10-13 16:03:35.309 2 DEBUG nova.compute.manager [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwuqy4qjr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='448b3b84-87cf-4053-951d-095f41bc7996',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:03:35 np0005485008 podman[202884]: time="2025-10-13T16:03:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:03:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:03:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:03:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:03:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3470 "" "Go-http-client/1.1"
Oct 13 12:03:36 np0005485008 nova_compute[192512]: 2025-10-13 16:03:36.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:37 np0005485008 nova_compute[192512]: 2025-10-13 16:03:37.115 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:03:37 np0005485008 nova_compute[192512]: 2025-10-13 16:03:37.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:37 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:37.219 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:03:37 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:37.220 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:03:38 np0005485008 nova_compute[192512]: 2025-10-13 16:03:38.304 2 DEBUG nova.network.neutron [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Port f6c12ba8-3bb6-47bf-a285-b860dedb2644 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:03:38 np0005485008 nova_compute[192512]: 2025-10-13 16:03:38.306 2 DEBUG nova.compute.manager [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwuqy4qjr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='448b3b84-87cf-4053-951d-095f41bc7996',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:03:38 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 12:03:38 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 12:03:38 np0005485008 kernel: tapf6c12ba8-3b: entered promiscuous mode
Oct 13 12:03:38 np0005485008 nova_compute[192512]: 2025-10-13 16:03:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:38 np0005485008 NetworkManager[51587]: <info>  [1760371418.6600] manager: (tapf6c12ba8-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Oct 13 12:03:38 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:38Z|00214|binding|INFO|Claiming lport f6c12ba8-3bb6-47bf-a285-b860dedb2644 for this additional chassis.
Oct 13 12:03:38 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:38Z|00215|binding|INFO|f6c12ba8-3bb6-47bf-a285-b860dedb2644: Claiming fa:16:3e:a8:9d:be 10.100.0.6
Oct 13 12:03:38 np0005485008 nova_compute[192512]: 2025-10-13 16:03:38.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:38 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:38Z|00216|binding|INFO|Setting lport f6c12ba8-3bb6-47bf-a285-b860dedb2644 ovn-installed in OVS
Oct 13 12:03:38 np0005485008 nova_compute[192512]: 2025-10-13 16:03:38.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:38 np0005485008 nova_compute[192512]: 2025-10-13 16:03:38.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:38 np0005485008 systemd-udevd[223101]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:03:38 np0005485008 systemd-machined[152551]: New machine qemu-18-instance-00000013.
Oct 13 12:03:38 np0005485008 NetworkManager[51587]: <info>  [1760371418.7171] device (tapf6c12ba8-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:03:38 np0005485008 NetworkManager[51587]: <info>  [1760371418.7187] device (tapf6c12ba8-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:03:38 np0005485008 systemd[1]: Started Virtual Machine qemu-18-instance-00000013.
Oct 13 12:03:39 np0005485008 nova_compute[192512]: 2025-10-13 16:03:39.707 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371419.707218, 448b3b84-87cf-4053-951d-095f41bc7996 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:03:39 np0005485008 nova_compute[192512]: 2025-10-13 16:03:39.709 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] VM Started (Lifecycle Event)#033[00m
Oct 13 12:03:39 np0005485008 nova_compute[192512]: 2025-10-13 16:03:39.775 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:03:40 np0005485008 nova_compute[192512]: 2025-10-13 16:03:40.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:40 np0005485008 nova_compute[192512]: 2025-10-13 16:03:40.428 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371420.428322, 448b3b84-87cf-4053-951d-095f41bc7996 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:03:40 np0005485008 nova_compute[192512]: 2025-10-13 16:03:40.429 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:03:40 np0005485008 nova_compute[192512]: 2025-10-13 16:03:40.452 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:03:40 np0005485008 nova_compute[192512]: 2025-10-13 16:03:40.455 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:03:40 np0005485008 nova_compute[192512]: 2025-10-13 16:03:40.479 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:03:41 np0005485008 nova_compute[192512]: 2025-10-13 16:03:41.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:42.222 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:43 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:43Z|00217|binding|INFO|Claiming lport f6c12ba8-3bb6-47bf-a285-b860dedb2644 for this chassis.
Oct 13 12:03:43 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:43Z|00218|binding|INFO|f6c12ba8-3bb6-47bf-a285-b860dedb2644: Claiming fa:16:3e:a8:9d:be 10.100.0.6
Oct 13 12:03:43 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:43Z|00219|binding|INFO|Setting lport f6c12ba8-3bb6-47bf-a285-b860dedb2644 up in Southbound
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.830 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:9d:be 10.100.0.6'], port_security=['fa:16:3e:a8:9d:be 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '448b3b84-87cf-4053-951d-095f41bc7996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=f6c12ba8-3bb6-47bf-a285-b860dedb2644) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.832 103642 INFO neutron.agent.ovn.metadata.agent [-] Port f6c12ba8-3bb6-47bf-a285-b860dedb2644 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.834 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.852 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c099f57c-ec3b-4647-b902-d6ea3142a5c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.886 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[6265c151-9680-4393-8d0e-44fcc3200898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.890 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ff2fca-d2a5-447d-a570-1fa351f19cc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.921 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b4a9ea-ae53-4d7b-8054-329b077d67ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.941 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6e2aac-2916-403b-9324-267d9614de48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489421, 'reachable_time': 20345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 223135, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.963 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[87eb37c9-0a7d-432b-bf78-1bff9954dca3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489435, 'tstamp': 489435}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 223136, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489439, 'tstamp': 489439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 223136, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.966 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:43 np0005485008 nova_compute[192512]: 2025-10-13 16:03:43.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.969 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.969 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.970 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:43 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:43.970 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:03:44 np0005485008 nova_compute[192512]: 2025-10-13 16:03:44.330 2 INFO nova.compute.manager [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Post operation of migration started#033[00m
Oct 13 12:03:45 np0005485008 nova_compute[192512]: 2025-10-13 16:03:45.109 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-448b3b84-87cf-4053-951d-095f41bc7996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:03:45 np0005485008 nova_compute[192512]: 2025-10-13 16:03:45.110 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-448b3b84-87cf-4053-951d-095f41bc7996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:03:45 np0005485008 nova_compute[192512]: 2025-10-13 16:03:45.110 2 DEBUG nova.network.neutron [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:03:45 np0005485008 nova_compute[192512]: 2025-10-13 16:03:45.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:45 np0005485008 podman[223137]: 2025-10-13 16:03:45.759860249 +0000 UTC m=+0.063594772 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Oct 13 12:03:46 np0005485008 nova_compute[192512]: 2025-10-13 16:03:46.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:47 np0005485008 nova_compute[192512]: 2025-10-13 16:03:47.516 2 DEBUG nova.network.neutron [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Updating instance_info_cache with network_info: [{"id": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "address": "fa:16:3e:a8:9d:be", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6c12ba8-3b", "ovs_interfaceid": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:03:47 np0005485008 nova_compute[192512]: 2025-10-13 16:03:47.546 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-448b3b84-87cf-4053-951d-095f41bc7996" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:03:47 np0005485008 nova_compute[192512]: 2025-10-13 16:03:47.566 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:47 np0005485008 nova_compute[192512]: 2025-10-13 16:03:47.566 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:47 np0005485008 nova_compute[192512]: 2025-10-13 16:03:47.566 2 DEBUG oslo_concurrency.lockutils [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:47 np0005485008 nova_compute[192512]: 2025-10-13 16:03:47.572 2 INFO nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:03:47 np0005485008 virtqemud[192082]: Domain id=18 name='instance-00000013' uuid=448b3b84-87cf-4053-951d-095f41bc7996 is tainted: custom-monitor
Oct 13 12:03:48 np0005485008 nova_compute[192512]: 2025-10-13 16:03:48.583 2 INFO nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:03:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:03:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:03:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:03:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:03:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:03:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:03:49 np0005485008 nova_compute[192512]: 2025-10-13 16:03:49.590 2 INFO nova.virt.libvirt.driver [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:03:49 np0005485008 nova_compute[192512]: 2025-10-13 16:03:49.595 2 DEBUG nova.compute.manager [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:03:49 np0005485008 nova_compute[192512]: 2025-10-13 16:03:49.637 2 DEBUG nova.objects.instance [None req-68d52209-cf91-4b0a-a672-484798e48a4c f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:03:50 np0005485008 nova_compute[192512]: 2025-10-13 16:03:50.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:51 np0005485008 nova_compute[192512]: 2025-10-13 16:03:51.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:52 np0005485008 nova_compute[192512]: 2025-10-13 16:03:52.994 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:52 np0005485008 nova_compute[192512]: 2025-10-13 16:03:52.994 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:52 np0005485008 nova_compute[192512]: 2025-10-13 16:03:52.995 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:52 np0005485008 nova_compute[192512]: 2025-10-13 16:03:52.995 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:52 np0005485008 nova_compute[192512]: 2025-10-13 16:03:52.995 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:52 np0005485008 nova_compute[192512]: 2025-10-13 16:03:52.997 2 INFO nova.compute.manager [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Terminating instance#033[00m
Oct 13 12:03:52 np0005485008 nova_compute[192512]: 2025-10-13 16:03:52.998 2 DEBUG nova.compute.manager [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:03:53 np0005485008 kernel: tapd3b861e4-8a (unregistering): left promiscuous mode
Oct 13 12:03:53 np0005485008 NetworkManager[51587]: <info>  [1760371433.0220] device (tapd3b861e4-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:03:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:53Z|00220|binding|INFO|Releasing lport d3b861e4-8a7a-4e22-a5b3-75c71bbef549 from this chassis (sb_readonly=0)
Oct 13 12:03:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:53Z|00221|binding|INFO|Setting lport d3b861e4-8a7a-4e22-a5b3-75c71bbef549 down in Southbound
Oct 13 12:03:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:53Z|00222|binding|INFO|Removing iface tapd3b861e4-8a ovn-installed in OVS
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.046 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:6c:0c 10.100.0.9'], port_security=['fa:16:3e:09:6c:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '932125ca-093f-4fd1-b20d-0da836590da3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=d3b861e4-8a7a-4e22-a5b3-75c71bbef549) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.049 103642 INFO neutron.agent.ovn.metadata.agent [-] Port d3b861e4-8a7a-4e22-a5b3-75c71bbef549 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.050 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.070 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6020feac-7b94-44a9-a783-667189f10489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:53 np0005485008 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 13 12:03:53 np0005485008 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000014.scope: Consumed 16.376s CPU time.
Oct 13 12:03:53 np0005485008 systemd-machined[152551]: Machine qemu-17-instance-00000014 terminated.
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.104 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f755b07e-edf9-4072-82fd-1672c540bd19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.108 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7664ad-0cf9-4a61-88c9-f8738f37fb3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.139 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8d3d8a-b6d7-497a-807e-07228ea73dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.159 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f89e70fe-60a7-41d0-83a5-e37466e790f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 7, 'rx_bytes': 1756, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489421, 'reachable_time': 20345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 223169, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.179 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb2488c-7661-4a58-a1ea-524cf3cb7f3d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489435, 'tstamp': 489435}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 223170, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 489439, 'tstamp': 489439}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 223170, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.181 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.231 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.232 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.232 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:53.233 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.278 2 INFO nova.virt.libvirt.driver [-] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Instance destroyed successfully.#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.278 2 DEBUG nova.objects.instance [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid 932125ca-093f-4fd1-b20d-0da836590da3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.334 2 DEBUG nova.virt.libvirt.vif [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:02:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-906170487',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-906170487',id=20,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:02:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-mzqzl357',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:02:37Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=932125ca-093f-4fd1-b20d-0da836590da3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.335 2 DEBUG nova.network.os_vif_util [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "address": "fa:16:3e:09:6c:0c", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3b861e4-8a", "ovs_interfaceid": "d3b861e4-8a7a-4e22-a5b3-75c71bbef549", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.336 2 DEBUG nova.network.os_vif_util [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:6c:0c,bridge_name='br-int',has_traffic_filtering=True,id=d3b861e4-8a7a-4e22-a5b3-75c71bbef549,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b861e4-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.337 2 DEBUG os_vif [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:6c:0c,bridge_name='br-int',has_traffic_filtering=True,id=d3b861e4-8a7a-4e22-a5b3-75c71bbef549,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b861e4-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3b861e4-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.347 2 INFO os_vif [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:6c:0c,bridge_name='br-int',has_traffic_filtering=True,id=d3b861e4-8a7a-4e22-a5b3-75c71bbef549,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3b861e4-8a')#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.348 2 INFO nova.virt.libvirt.driver [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Deleting instance files /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3_del#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.349 2 INFO nova.virt.libvirt.driver [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Deletion of /var/lib/nova/instances/932125ca-093f-4fd1-b20d-0da836590da3_del complete#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.386 2 DEBUG nova.compute.manager [req-7c2b2378-fd4c-4c15-b79e-fbe21978970f req-f04fe14f-c9d1-486c-89ba-0013d2cf32a4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received event network-vif-unplugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.387 2 DEBUG oslo_concurrency.lockutils [req-7c2b2378-fd4c-4c15-b79e-fbe21978970f req-f04fe14f-c9d1-486c-89ba-0013d2cf32a4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.387 2 DEBUG oslo_concurrency.lockutils [req-7c2b2378-fd4c-4c15-b79e-fbe21978970f req-f04fe14f-c9d1-486c-89ba-0013d2cf32a4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.387 2 DEBUG oslo_concurrency.lockutils [req-7c2b2378-fd4c-4c15-b79e-fbe21978970f req-f04fe14f-c9d1-486c-89ba-0013d2cf32a4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.387 2 DEBUG nova.compute.manager [req-7c2b2378-fd4c-4c15-b79e-fbe21978970f req-f04fe14f-c9d1-486c-89ba-0013d2cf32a4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] No waiting events found dispatching network-vif-unplugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.388 2 DEBUG nova.compute.manager [req-7c2b2378-fd4c-4c15-b79e-fbe21978970f req-f04fe14f-c9d1-486c-89ba-0013d2cf32a4 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received event network-vif-unplugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.462 2 INFO nova.compute.manager [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.463 2 DEBUG oslo.service.loopingcall [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.464 2 DEBUG nova.compute.manager [-] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:03:53 np0005485008 nova_compute[192512]: 2025-10-13 16:03:53.464 2 DEBUG nova.network.neutron [-] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.376 2 DEBUG nova.network.neutron [-] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.427 2 INFO nova.compute.manager [-] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Took 1.96 seconds to deallocate network for instance.#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.467 2 DEBUG nova.compute.manager [req-4e87993e-8c2d-4f24-8c12-09c450702302 req-0ce94636-9c0b-49ab-885d-a09d22f3b8d1 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received event network-vif-deleted-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.498 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.498 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.566 2 DEBUG nova.compute.manager [req-bdf49f14-60a1-4cef-8853-5575de5ab087 req-10e42c31-ff70-4ea2-9cfc-69007b744d1e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received event network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.567 2 DEBUG oslo_concurrency.lockutils [req-bdf49f14-60a1-4cef-8853-5575de5ab087 req-10e42c31-ff70-4ea2-9cfc-69007b744d1e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "932125ca-093f-4fd1-b20d-0da836590da3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.567 2 DEBUG oslo_concurrency.lockutils [req-bdf49f14-60a1-4cef-8853-5575de5ab087 req-10e42c31-ff70-4ea2-9cfc-69007b744d1e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.567 2 DEBUG oslo_concurrency.lockutils [req-bdf49f14-60a1-4cef-8853-5575de5ab087 req-10e42c31-ff70-4ea2-9cfc-69007b744d1e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.567 2 DEBUG nova.compute.manager [req-bdf49f14-60a1-4cef-8853-5575de5ab087 req-10e42c31-ff70-4ea2-9cfc-69007b744d1e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] No waiting events found dispatching network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.568 2 WARNING nova.compute.manager [req-bdf49f14-60a1-4cef-8853-5575de5ab087 req-10e42c31-ff70-4ea2-9cfc-69007b744d1e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Received unexpected event network-vif-plugged-d3b861e4-8a7a-4e22-a5b3-75c71bbef549 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.626 2 DEBUG nova.compute.provider_tree [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.650 2 DEBUG nova.scheduler.client.report [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.673 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.702 2 INFO nova.scheduler.client.report [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance 932125ca-093f-4fd1-b20d-0da836590da3#033[00m
Oct 13 12:03:55 np0005485008 nova_compute[192512]: 2025-10-13 16:03:55.800 2 DEBUG oslo_concurrency.lockutils [None req-b5beeb0c-2906-4af4-9022-483ec59f561e 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "932125ca-093f-4fd1-b20d-0da836590da3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:56 np0005485008 nova_compute[192512]: 2025-10-13 16:03:56.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.121 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "448b3b84-87cf-4053-951d-095f41bc7996" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.122 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.122 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "448b3b84-87cf-4053-951d-095f41bc7996-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.122 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.123 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.124 2 INFO nova.compute.manager [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Terminating instance#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.125 2 DEBUG nova.compute.manager [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:03:57 np0005485008 kernel: tapf6c12ba8-3b (unregistering): left promiscuous mode
Oct 13 12:03:57 np0005485008 NetworkManager[51587]: <info>  [1760371437.1496] device (tapf6c12ba8-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:57Z|00223|binding|INFO|Releasing lport f6c12ba8-3bb6-47bf-a285-b860dedb2644 from this chassis (sb_readonly=0)
Oct 13 12:03:57 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:57Z|00224|binding|INFO|Setting lport f6c12ba8-3bb6-47bf-a285-b860dedb2644 down in Southbound
Oct 13 12:03:57 np0005485008 ovn_controller[94758]: 2025-10-13T16:03:57Z|00225|binding|INFO|Removing iface tapf6c12ba8-3b ovn-installed in OVS
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.163 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:9d:be 10.100.0.6'], port_security=['fa:16:3e:a8:9d:be 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '448b3b84-87cf-4053-951d-095f41bc7996', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=f6c12ba8-3bb6-47bf-a285-b860dedb2644) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.164 103642 INFO neutron.agent.ovn.metadata.agent [-] Port f6c12ba8-3bb6-47bf-a285-b860dedb2644 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.166 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.167 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7be7e63b-ca50-4732-ad97-4cc5f2f8aef9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.167 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace which is not needed anymore#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct 13 12:03:57 np0005485008 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000013.scope: Consumed 2.122s CPU time.
Oct 13 12:03:57 np0005485008 systemd-machined[152551]: Machine qemu-18-instance-00000013 terminated.
Oct 13 12:03:57 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [NOTICE]   (222758) : haproxy version is 2.8.14-c23fe91
Oct 13 12:03:57 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [NOTICE]   (222758) : path to executable is /usr/sbin/haproxy
Oct 13 12:03:57 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [WARNING]  (222758) : Exiting Master process...
Oct 13 12:03:57 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [WARNING]  (222758) : Exiting Master process...
Oct 13 12:03:57 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [ALERT]    (222758) : Current worker (222760) exited with code 143 (Terminated)
Oct 13 12:03:57 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[222754]: [WARNING]  (222758) : All workers exited. Exiting... (0)
Oct 13 12:03:57 np0005485008 systemd[1]: libpod-bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124.scope: Deactivated successfully.
Oct 13 12:03:57 np0005485008 podman[223213]: 2025-10-13 16:03:57.315267063 +0000 UTC m=+0.051627304 container died bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:03:57 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124-userdata-shm.mount: Deactivated successfully.
Oct 13 12:03:57 np0005485008 systemd[1]: var-lib-containers-storage-overlay-05405805cdc3f07a4969ac036a712cb6ff08cd2e958d3842742b59d3e4acb1eb-merged.mount: Deactivated successfully.
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.389 2 INFO nova.virt.libvirt.driver [-] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Instance destroyed successfully.#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.390 2 DEBUG nova.objects.instance [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid 448b3b84-87cf-4053-951d-095f41bc7996 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.441 2 DEBUG nova.virt.libvirt.vif [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-690751275',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-690751275',id=19,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:02:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-sed9za92',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:03:49Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=448b3b84-87cf-4053-951d-095f41bc7996,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "address": "fa:16:3e:a8:9d:be", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6c12ba8-3b", "ovs_interfaceid": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.442 2 DEBUG nova.network.os_vif_util [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "address": "fa:16:3e:a8:9d:be", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf6c12ba8-3b", "ovs_interfaceid": "f6c12ba8-3bb6-47bf-a285-b860dedb2644", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.442 2 DEBUG nova.network.os_vif_util [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9d:be,bridge_name='br-int',has_traffic_filtering=True,id=f6c12ba8-3bb6-47bf-a285-b860dedb2644,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6c12ba8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.443 2 DEBUG os_vif [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9d:be,bridge_name='br-int',has_traffic_filtering=True,id=f6c12ba8-3bb6-47bf-a285-b860dedb2644,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6c12ba8-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.445 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6c12ba8-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.452 2 INFO os_vif [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:9d:be,bridge_name='br-int',has_traffic_filtering=True,id=f6c12ba8-3bb6-47bf-a285-b860dedb2644,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf6c12ba8-3b')#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.452 2 INFO nova.virt.libvirt.driver [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Deleting instance files /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996_del#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.453 2 INFO nova.virt.libvirt.driver [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Deletion of /var/lib/nova/instances/448b3b84-87cf-4053-951d-095f41bc7996_del complete#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.518 2 INFO nova.compute.manager [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.519 2 DEBUG oslo.service.loopingcall [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.519 2 DEBUG nova.compute.manager [-] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.519 2 DEBUG nova.network.neutron [-] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:03:57 np0005485008 podman[223213]: 2025-10-13 16:03:57.540677711 +0000 UTC m=+0.277037952 container cleanup bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 12:03:57 np0005485008 systemd[1]: libpod-conmon-bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124.scope: Deactivated successfully.
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.709 2 DEBUG nova.compute.manager [req-9a423dea-fdcb-4775-9a2f-05041693a49b req-8262aeb2-9f7a-447d-bc09-3c7fa1d8bd80 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Received event network-vif-unplugged-f6c12ba8-3bb6-47bf-a285-b860dedb2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.709 2 DEBUG oslo_concurrency.lockutils [req-9a423dea-fdcb-4775-9a2f-05041693a49b req-8262aeb2-9f7a-447d-bc09-3c7fa1d8bd80 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "448b3b84-87cf-4053-951d-095f41bc7996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.709 2 DEBUG oslo_concurrency.lockutils [req-9a423dea-fdcb-4775-9a2f-05041693a49b req-8262aeb2-9f7a-447d-bc09-3c7fa1d8bd80 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.709 2 DEBUG oslo_concurrency.lockutils [req-9a423dea-fdcb-4775-9a2f-05041693a49b req-8262aeb2-9f7a-447d-bc09-3c7fa1d8bd80 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.710 2 DEBUG nova.compute.manager [req-9a423dea-fdcb-4775-9a2f-05041693a49b req-8262aeb2-9f7a-447d-bc09-3c7fa1d8bd80 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] No waiting events found dispatching network-vif-unplugged-f6c12ba8-3bb6-47bf-a285-b860dedb2644 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.710 2 DEBUG nova.compute.manager [req-9a423dea-fdcb-4775-9a2f-05041693a49b req-8262aeb2-9f7a-447d-bc09-3c7fa1d8bd80 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Received event network-vif-unplugged-f6c12ba8-3bb6-47bf-a285-b860dedb2644 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:03:57 np0005485008 podman[223257]: 2025-10-13 16:03:57.777580314 +0000 UTC m=+0.213733531 container remove bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.783 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[51c5e84c-7f0d-40ef-904d-65e49d5a1e33]: (4, ('Mon Oct 13 04:03:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124)\nbb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124\nMon Oct 13 04:03:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (bb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124)\nbb27b8b98c01b7faccf37977d87f062e7687da201b6569fb0c0da913a778d124\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.785 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1e47d1d1-4a32-43ab-bc31-2d16fbcf4594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.787 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:03:57 np0005485008 kernel: tap39a43da9-c0: left promiscuous mode
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 nova_compute[192512]: 2025-10-13 16:03:57.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.803 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3a531b08-87bf-4b19-ba1b-097c9040495f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.829 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3c36c268-b1b0-49b6-8d75-bf6100ab7773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.831 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9b439c43-7d5b-4d0a-8c30-655c454c6d0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.850 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[25c4a1c7-dad0-4bf9-97f6-13ddf0e1f3a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 489413, 'reachable_time': 38871, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 223271, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.853 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:03:57 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:03:57.853 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc865f9-2989-40bd-9f9e-6cd39399bc57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:03:57 np0005485008 systemd[1]: run-netns-ovnmeta\x2d39a43da9\x2dcf4c\x2d4fe3\x2dab73\x2dbf8705320dae.mount: Deactivated successfully.
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.923 2 DEBUG nova.compute.manager [req-17c031b5-7baa-4929-8f2b-4cbbb4c52337 req-905196ee-0d89-405d-b178-cd139bc4de04 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Received event network-vif-plugged-f6c12ba8-3bb6-47bf-a285-b860dedb2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.923 2 DEBUG oslo_concurrency.lockutils [req-17c031b5-7baa-4929-8f2b-4cbbb4c52337 req-905196ee-0d89-405d-b178-cd139bc4de04 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "448b3b84-87cf-4053-951d-095f41bc7996-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.923 2 DEBUG oslo_concurrency.lockutils [req-17c031b5-7baa-4929-8f2b-4cbbb4c52337 req-905196ee-0d89-405d-b178-cd139bc4de04 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.924 2 DEBUG oslo_concurrency.lockutils [req-17c031b5-7baa-4929-8f2b-4cbbb4c52337 req-905196ee-0d89-405d-b178-cd139bc4de04 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.924 2 DEBUG nova.compute.manager [req-17c031b5-7baa-4929-8f2b-4cbbb4c52337 req-905196ee-0d89-405d-b178-cd139bc4de04 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] No waiting events found dispatching network-vif-plugged-f6c12ba8-3bb6-47bf-a285-b860dedb2644 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.924 2 WARNING nova.compute.manager [req-17c031b5-7baa-4929-8f2b-4cbbb4c52337 req-905196ee-0d89-405d-b178-cd139bc4de04 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Received unexpected event network-vif-plugged-f6c12ba8-3bb6-47bf-a285-b860dedb2644 for instance with vm_state active and task_state deleting.#033[00m
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.938 2 DEBUG nova.network.neutron [-] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:03:59 np0005485008 nova_compute[192512]: 2025-10-13 16:03:59.959 2 INFO nova.compute.manager [-] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Took 2.44 seconds to deallocate network for instance.#033[00m
Oct 13 12:04:00 np0005485008 nova_compute[192512]: 2025-10-13 16:04:00.013 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:04:00 np0005485008 nova_compute[192512]: 2025-10-13 16:04:00.014 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:04:00 np0005485008 nova_compute[192512]: 2025-10-13 16:04:00.018 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:04:00 np0005485008 nova_compute[192512]: 2025-10-13 16:04:00.072 2 DEBUG nova.compute.manager [req-04abf742-4c50-462a-b981-6ea47c837c00 req-5c53ae84-8d98-47d0-b2c5-931b59d0ff6d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Received event network-vif-deleted-f6c12ba8-3bb6-47bf-a285-b860dedb2644 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:04:00 np0005485008 nova_compute[192512]: 2025-10-13 16:04:00.074 2 INFO nova.scheduler.client.report [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance 448b3b84-87cf-4053-951d-095f41bc7996#033[00m
Oct 13 12:04:00 np0005485008 nova_compute[192512]: 2025-10-13 16:04:00.173 2 DEBUG oslo_concurrency.lockutils [None req-5a176d00-dfd1-49a9-9505-93711b803da2 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "448b3b84-87cf-4053-951d-095f41bc7996" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:04:01 np0005485008 nova_compute[192512]: 2025-10-13 16:04:01.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:02 np0005485008 nova_compute[192512]: 2025-10-13 16:04:02.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:02 np0005485008 podman[223275]: 2025-10-13 16:04:02.775300214 +0000 UTC m=+0.067770605 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 12:04:02 np0005485008 podman[223274]: 2025-10-13 16:04:02.782343287 +0000 UTC m=+0.079120764 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:04:02 np0005485008 podman[223276]: 2025-10-13 16:04:02.795555314 +0000 UTC m=+0.089244233 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 12:04:02 np0005485008 podman[223277]: 2025-10-13 16:04:02.800568883 +0000 UTC m=+0.091984130 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:04:02 np0005485008 podman[223278]: 2025-10-13 16:04:02.840538277 +0000 UTC m=+0.126437020 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 12:04:05 np0005485008 podman[202884]: time="2025-10-13T16:04:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:04:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:04:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:04:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:04:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3003 "" "Go-http-client/1.1"
Oct 13 12:04:06 np0005485008 nova_compute[192512]: 2025-10-13 16:04:06.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:07 np0005485008 nova_compute[192512]: 2025-10-13 16:04:07.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:08 np0005485008 nova_compute[192512]: 2025-10-13 16:04:08.278 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371433.276335, 932125ca-093f-4fd1-b20d-0da836590da3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:04:08 np0005485008 nova_compute[192512]: 2025-10-13 16:04:08.278 2 INFO nova.compute.manager [-] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:04:08 np0005485008 nova_compute[192512]: 2025-10-13 16:04:08.309 2 DEBUG nova.compute.manager [None req-b7511695-1418-4a03-a2b6-66c62238a2c9 - - - - - -] [instance: 932125ca-093f-4fd1-b20d-0da836590da3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:04:11 np0005485008 nova_compute[192512]: 2025-10-13 16:04:11.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:12 np0005485008 nova_compute[192512]: 2025-10-13 16:04:12.389 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371437.3868628, 448b3b84-87cf-4053-951d-095f41bc7996 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:04:12 np0005485008 nova_compute[192512]: 2025-10-13 16:04:12.389 2 INFO nova.compute.manager [-] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:04:12 np0005485008 nova_compute[192512]: 2025-10-13 16:04:12.421 2 DEBUG nova.compute.manager [None req-a10866e0-4c00-4121-b932-218b2b19bcf4 - - - - - -] [instance: 448b3b84-87cf-4053-951d-095f41bc7996] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:04:12 np0005485008 nova_compute[192512]: 2025-10-13 16:04:12.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:15 np0005485008 nova_compute[192512]: 2025-10-13 16:04:15.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:15 np0005485008 nova_compute[192512]: 2025-10-13 16:04:15.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:04:16 np0005485008 nova_compute[192512]: 2025-10-13 16:04:16.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:16 np0005485008 podman[223373]: 2025-10-13 16:04:16.750017475 +0000 UTC m=+0.052908806 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Oct 13 12:04:17 np0005485008 nova_compute[192512]: 2025-10-13 16:04:17.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:04:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:04:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:04:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:04:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:04:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:04:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:04:19 np0005485008 nova_compute[192512]: 2025-10-13 16:04:19.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:21 np0005485008 nova_compute[192512]: 2025-10-13 16:04:21.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:21 np0005485008 nova_compute[192512]: 2025-10-13 16:04:21.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:21 np0005485008 nova_compute[192512]: 2025-10-13 16:04:21.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:21 np0005485008 nova_compute[192512]: 2025-10-13 16:04:21.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:22 np0005485008 nova_compute[192512]: 2025-10-13 16:04:22.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.441 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.441 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.442 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.468 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.625 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.627 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5864MB free_disk=73.46556854248047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.627 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.627 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.689 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.690 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.709 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.732 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.752 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:04:24 np0005485008 nova_compute[192512]: 2025-10-13 16:04:24.753 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:04:25 np0005485008 nova_compute[192512]: 2025-10-13 16:04:25.739 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:04:26 np0005485008 nova_compute[192512]: 2025-10-13 16:04:26.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:27 np0005485008 nova_compute[192512]: 2025-10-13 16:04:27.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:29 np0005485008 ovn_controller[94758]: 2025-10-13T16:04:29Z|00226|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct 13 12:04:31 np0005485008 nova_compute[192512]: 2025-10-13 16:04:31.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:32 np0005485008 nova_compute[192512]: 2025-10-13 16:04:32.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:33 np0005485008 podman[223397]: 2025-10-13 16:04:33.756785556 +0000 UTC m=+0.057939994 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 12:04:33 np0005485008 podman[223398]: 2025-10-13 16:04:33.765866214 +0000 UTC m=+0.065224315 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:04:33 np0005485008 podman[223396]: 2025-10-13 16:04:33.768893448 +0000 UTC m=+0.073458503 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 12:04:33 np0005485008 podman[223399]: 2025-10-13 16:04:33.807075497 +0000 UTC m=+0.102107711 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:04:33 np0005485008 podman[223401]: 2025-10-13 16:04:33.807431088 +0000 UTC m=+0.092481366 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 13 12:04:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:04:33.968 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:04:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:04:33.969 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:04:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:04:33.969 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:04:35 np0005485008 podman[202884]: time="2025-10-13T16:04:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:04:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:04:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:04:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:04:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3008 "" "Go-http-client/1.1"
Oct 13 12:04:36 np0005485008 nova_compute[192512]: 2025-10-13 16:04:36.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:37 np0005485008 nova_compute[192512]: 2025-10-13 16:04:37.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:41 np0005485008 nova_compute[192512]: 2025-10-13 16:04:41.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:42 np0005485008 nova_compute[192512]: 2025-10-13 16:04:42.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:46 np0005485008 nova_compute[192512]: 2025-10-13 16:04:46.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:47 np0005485008 nova_compute[192512]: 2025-10-13 16:04:47.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:47 np0005485008 podman[223496]: 2025-10-13 16:04:47.754801702 +0000 UTC m=+0.060653100 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 12:04:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:04:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:04:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:04:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:04:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:04:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:04:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:04:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:04:51 np0005485008 nova_compute[192512]: 2025-10-13 16:04:51.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:52 np0005485008 nova_compute[192512]: 2025-10-13 16:04:52.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:04:56.232 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:04:56 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:04:56.233 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:04:56 np0005485008 nova_compute[192512]: 2025-10-13 16:04:56.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:56 np0005485008 nova_compute[192512]: 2025-10-13 16:04:56.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:57 np0005485008 nova_compute[192512]: 2025-10-13 16:04:57.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:04:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:04:58.236 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:05:01 np0005485008 nova_compute[192512]: 2025-10-13 16:05:01.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:02 np0005485008 nova_compute[192512]: 2025-10-13 16:05:02.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:04 np0005485008 podman[223519]: 2025-10-13 16:05:04.766073924 +0000 UTC m=+0.053746370 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 12:05:04 np0005485008 podman[223520]: 2025-10-13 16:05:04.770092532 +0000 UTC m=+0.053590746 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 12:05:04 np0005485008 podman[223518]: 2025-10-13 16:05:04.797445917 +0000 UTC m=+0.088950964 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid)
Oct 13 12:05:04 np0005485008 podman[223517]: 2025-10-13 16:05:04.801279708 +0000 UTC m=+0.095417919 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 12:05:04 np0005485008 podman[223524]: 2025-10-13 16:05:04.806356398 +0000 UTC m=+0.089667916 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:05:05 np0005485008 podman[202884]: time="2025-10-13T16:05:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:05:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:05:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:05:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:05:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3011 "" "Go-http-client/1.1"
Oct 13 12:05:06 np0005485008 nova_compute[192512]: 2025-10-13 16:05:06.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:07 np0005485008 nova_compute[192512]: 2025-10-13 16:05:07.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:11 np0005485008 nova_compute[192512]: 2025-10-13 16:05:11.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:12 np0005485008 nova_compute[192512]: 2025-10-13 16:05:12.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:15 np0005485008 nova_compute[192512]: 2025-10-13 16:05:15.772 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:16 np0005485008 nova_compute[192512]: 2025-10-13 16:05:16.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:16 np0005485008 nova_compute[192512]: 2025-10-13 16:05:16.431 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:05:16 np0005485008 nova_compute[192512]: 2025-10-13 16:05:16.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:17 np0005485008 nova_compute[192512]: 2025-10-13 16:05:17.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:18 np0005485008 podman[223617]: 2025-10-13 16:05:18.756302204 +0000 UTC m=+0.061670130 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Oct 13 12:05:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:05:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:05:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:05:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:05:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:05:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:05:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:05:19 np0005485008 nova_compute[192512]: 2025-10-13 16:05:19.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:21 np0005485008 nova_compute[192512]: 2025-10-13 16:05:21.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:21 np0005485008 nova_compute[192512]: 2025-10-13 16:05:21.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:22 np0005485008 nova_compute[192512]: 2025-10-13 16:05:22.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:22 np0005485008 nova_compute[192512]: 2025-10-13 16:05:22.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:22 np0005485008 nova_compute[192512]: 2025-10-13 16:05:22.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:23 np0005485008 nova_compute[192512]: 2025-10-13 16:05:23.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:25 np0005485008 nova_compute[192512]: 2025-10-13 16:05:25.442 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:25 np0005485008 nova_compute[192512]: 2025-10-13 16:05:25.442 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:05:25 np0005485008 nova_compute[192512]: 2025-10-13 16:05:25.443 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:05:25 np0005485008 nova_compute[192512]: 2025-10-13 16:05:25.457 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:05:25 np0005485008 nova_compute[192512]: 2025-10-13 16:05:25.458 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.450 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.451 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.451 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.451 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.618 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.619 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5860MB free_disk=73.46556854248047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.619 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.619 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.989 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:05:26 np0005485008 nova_compute[192512]: 2025-10-13 16:05:26.990 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:05:27 np0005485008 nova_compute[192512]: 2025-10-13 16:05:27.024 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:05:27 np0005485008 nova_compute[192512]: 2025-10-13 16:05:27.046 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:05:27 np0005485008 nova_compute[192512]: 2025-10-13 16:05:27.048 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:05:27 np0005485008 nova_compute[192512]: 2025-10-13 16:05:27.048 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:05:27 np0005485008 nova_compute[192512]: 2025-10-13 16:05:27.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:31 np0005485008 nova_compute[192512]: 2025-10-13 16:05:31.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:32 np0005485008 nova_compute[192512]: 2025-10-13 16:05:32.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:05:33.970 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:05:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:05:33.970 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:05:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:05:33.970 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:05:35 np0005485008 nova_compute[192512]: 2025-10-13 16:05:35.045 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:35 np0005485008 podman[202884]: time="2025-10-13T16:05:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:05:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:05:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:05:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:05:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3007 "" "Go-http-client/1.1"
Oct 13 12:05:35 np0005485008 podman[223639]: 2025-10-13 16:05:35.759347652 +0000 UTC m=+0.064744679 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 12:05:35 np0005485008 podman[223642]: 2025-10-13 16:05:35.786390937 +0000 UTC m=+0.068149077 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:05:35 np0005485008 podman[223641]: 2025-10-13 16:05:35.788605256 +0000 UTC m=+0.086849557 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 12:05:35 np0005485008 podman[223640]: 2025-10-13 16:05:35.795747013 +0000 UTC m=+0.087701515 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 13 12:05:35 np0005485008 podman[223648]: 2025-10-13 16:05:35.821747024 +0000 UTC m=+0.108411259 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 12:05:36 np0005485008 nova_compute[192512]: 2025-10-13 16:05:36.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:37 np0005485008 nova_compute[192512]: 2025-10-13 16:05:37.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:40 np0005485008 ovn_controller[94758]: 2025-10-13T16:05:40Z|00227|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 13 12:05:40 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 12:05:41 np0005485008 nova_compute[192512]: 2025-10-13 16:05:41.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:41 np0005485008 nova_compute[192512]: 2025-10-13 16:05:41.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 12:05:41 np0005485008 nova_compute[192512]: 2025-10-13 16:05:41.449 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 12:05:41 np0005485008 nova_compute[192512]: 2025-10-13 16:05:41.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:42 np0005485008 nova_compute[192512]: 2025-10-13 16:05:42.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:44 np0005485008 nova_compute[192512]: 2025-10-13 16:05:44.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:44 np0005485008 nova_compute[192512]: 2025-10-13 16:05:44.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 12:05:44 np0005485008 nova_compute[192512]: 2025-10-13 16:05:44.786 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:05:46 np0005485008 nova_compute[192512]: 2025-10-13 16:05:46.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:47 np0005485008 nova_compute[192512]: 2025-10-13 16:05:47.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:05:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:05:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:05:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:05:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:05:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:05:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:05:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:05:49 np0005485008 podman[223744]: 2025-10-13 16:05:49.750863292 +0000 UTC m=+0.055877588 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 12:05:51 np0005485008 nova_compute[192512]: 2025-10-13 16:05:51.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:52 np0005485008 nova_compute[192512]: 2025-10-13 16:05:52.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:56 np0005485008 nova_compute[192512]: 2025-10-13 16:05:56.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:05:57 np0005485008 nova_compute[192512]: 2025-10-13 16:05:57.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:01 np0005485008 nova_compute[192512]: 2025-10-13 16:06:01.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:02 np0005485008 nova_compute[192512]: 2025-10-13 16:06:02.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:05 np0005485008 podman[202884]: time="2025-10-13T16:06:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:06:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:06:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:06:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:06:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3000 "" "Go-http-client/1.1"
Oct 13 12:06:06 np0005485008 nova_compute[192512]: 2025-10-13 16:06:06.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:06 np0005485008 podman[223774]: 2025-10-13 16:06:06.773680142 +0000 UTC m=+0.063007533 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:06:06 np0005485008 podman[223766]: 2025-10-13 16:06:06.773838797 +0000 UTC m=+0.073983180 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 13 12:06:06 np0005485008 podman[223765]: 2025-10-13 16:06:06.796391571 +0000 UTC m=+0.100749838 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Oct 13 12:06:06 np0005485008 podman[223767]: 2025-10-13 16:06:06.803996362 +0000 UTC m=+0.096649188 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct 13 12:06:06 np0005485008 podman[223780]: 2025-10-13 16:06:06.823991723 +0000 UTC m=+0.101642645 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 12:06:07 np0005485008 nova_compute[192512]: 2025-10-13 16:06:07.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:11 np0005485008 nova_compute[192512]: 2025-10-13 16:06:11.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:12 np0005485008 nova_compute[192512]: 2025-10-13 16:06:12.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:16 np0005485008 nova_compute[192512]: 2025-10-13 16:06:16.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:17 np0005485008 nova_compute[192512]: 2025-10-13 16:06:17.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:18 np0005485008 nova_compute[192512]: 2025-10-13 16:06:18.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:18 np0005485008 nova_compute[192512]: 2025-10-13 16:06:18.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:06:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:06:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:06:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:06:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:06:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:06:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:06:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:06:20 np0005485008 nova_compute[192512]: 2025-10-13 16:06:20.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:20 np0005485008 podman[223867]: 2025-10-13 16:06:20.761420055 +0000 UTC m=+0.061438924 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6)
Oct 13 12:06:21 np0005485008 nova_compute[192512]: 2025-10-13 16:06:21.425 2 DEBUG nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Creating tmpfile /var/lib/nova/instances/tmpq4czos32 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:06:21 np0005485008 nova_compute[192512]: 2025-10-13 16:06:21.426 2 DEBUG nova.compute.manager [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq4czos32',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:06:21 np0005485008 nova_compute[192512]: 2025-10-13 16:06:21.674 2 DEBUG nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Creating tmpfile /var/lib/nova/instances/tmp2sj039pg to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:06:21 np0005485008 nova_compute[192512]: 2025-10-13 16:06:21.675 2 DEBUG nova.compute.manager [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2sj039pg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:06:21 np0005485008 nova_compute[192512]: 2025-10-13 16:06:21.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:22 np0005485008 nova_compute[192512]: 2025-10-13 16:06:22.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:22 np0005485008 nova_compute[192512]: 2025-10-13 16:06:22.701 2 DEBUG nova.compute.manager [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq4czos32',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0a1c6209-ffc3-440e-8934-3aa835a3a5f3',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:06:22 np0005485008 nova_compute[192512]: 2025-10-13 16:06:22.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:22 np0005485008 nova_compute[192512]: 2025-10-13 16:06:22.735 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-0a1c6209-ffc3-440e-8934-3aa835a3a5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:06:22 np0005485008 nova_compute[192512]: 2025-10-13 16:06:22.735 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-0a1c6209-ffc3-440e-8934-3aa835a3a5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:06:22 np0005485008 nova_compute[192512]: 2025-10-13 16:06:22.736 2 DEBUG nova.network.neutron [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:06:23 np0005485008 nova_compute[192512]: 2025-10-13 16:06:23.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:23 np0005485008 nova_compute[192512]: 2025-10-13 16:06:23.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.542 2 DEBUG nova.network.neutron [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Updating instance_info_cache with network_info: [{"id": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "address": "fa:16:3e:82:bb:95", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ec4ae-57", "ovs_interfaceid": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.577 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-0a1c6209-ffc3-440e-8934-3aa835a3a5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.579 2 DEBUG nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq4czos32',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0a1c6209-ffc3-440e-8934-3aa835a3a5f3',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.580 2 DEBUG nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Creating instance directory: /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.580 2 DEBUG nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Creating disk.info with the contents: {'/var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk': 'qcow2', '/var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.581 2 DEBUG nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.582 2 DEBUG nova.objects.instance [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.608 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.668 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.669 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.670 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.687 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.745 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.747 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.786 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.787 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.788 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.847 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.849 2 DEBUG nova.virt.disk.api [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.850 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.916 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.918 2 DEBUG nova.virt.disk.api [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.918 2 DEBUG nova.objects.instance [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.938 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.964 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk.config 485376" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.966 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk.config to /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:06:24 np0005485008 nova_compute[192512]: 2025-10-13 16:06:24.967 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk.config /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.391 2 DEBUG oslo_concurrency.processutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3/disk.config /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.392 2 DEBUG nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.393 2 DEBUG nova.virt.libvirt.vif [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2055945873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2055945873',id=21,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:05:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-1fkgi1bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:05:02Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=0a1c6209-ffc3-440e-8934-3aa835a3a5f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "address": "fa:16:3e:82:bb:95", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3b1ec4ae-57", "ovs_interfaceid": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.394 2 DEBUG nova.network.os_vif_util [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "address": "fa:16:3e:82:bb:95", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap3b1ec4ae-57", "ovs_interfaceid": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.395 2 DEBUG nova.network.os_vif_util [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:95,bridge_name='br-int',has_traffic_filtering=True,id=3b1ec4ae-579f-4e07-8443-31db62f8caff,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ec4ae-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.395 2 DEBUG os_vif [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:95,bridge_name='br-int',has_traffic_filtering=True,id=3b1ec4ae-579f-4e07-8443-31db62f8caff,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ec4ae-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.396 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b1ec4ae-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.400 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b1ec4ae-57, col_values=(('external_ids', {'iface-id': '3b1ec4ae-579f-4e07-8443-31db62f8caff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:bb:95', 'vm-uuid': '0a1c6209-ffc3-440e-8934-3aa835a3a5f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:25 np0005485008 NetworkManager[51587]: <info>  [1760371585.4035] manager: (tap3b1ec4ae-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.413 2 INFO os_vif [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:95,bridge_name='br-int',has_traffic_filtering=True,id=3b1ec4ae-579f-4e07-8443-31db62f8caff,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ec4ae-57')#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.414 2 DEBUG nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.414 2 DEBUG nova.compute.manager [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq4czos32',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0a1c6209-ffc3-440e-8934-3aa835a3a5f3',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.450 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:06:25 np0005485008 nova_compute[192512]: 2025-10-13 16:06:25.451 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:26.288 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:06:26 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:26.290 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.449 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.450 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.450 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.450 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.618 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.620 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5867MB free_disk=73.46490478515625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.621 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.659 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration for instance 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.659 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration for instance 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.704 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Updating resource usage from migration 1de25e83-f3bc-4733-a080-d33e6c649a79#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.705 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Starting to track incoming migration 1de25e83-f3bc-4733-a080-d33e6c649a79 with flavor ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.720 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Updating resource usage from migration 3d8307fe-ff7e-46b9-8d42-3693b30eaa4c#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.721 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Starting to track incoming migration 3d8307fe-ff7e-46b9-8d42-3693b30eaa4c with flavor ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.826 2 DEBUG nova.network.neutron [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Port 3b1ec4ae-579f-4e07-8443-31db62f8caff updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.827 2 DEBUG nova.compute.manager [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpq4czos32',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0a1c6209-ffc3-440e-8934-3aa835a3a5f3',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.830 2 WARNING nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.856 2 WARNING nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.856 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.857 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.919 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.931 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:06:26 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.960 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:06:26 np0005485008 nova_compute[192512]: 2025-10-13 16:06:26.961 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:26 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 12:06:27 np0005485008 kernel: tap3b1ec4ae-57: entered promiscuous mode
Oct 13 12:06:27 np0005485008 NetworkManager[51587]: <info>  [1760371587.1272] manager: (tap3b1ec4ae-57): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Oct 13 12:06:27 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:27Z|00228|binding|INFO|Claiming lport 3b1ec4ae-579f-4e07-8443-31db62f8caff for this additional chassis.
Oct 13 12:06:27 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:27Z|00229|binding|INFO|3b1ec4ae-579f-4e07-8443-31db62f8caff: Claiming fa:16:3e:82:bb:95 10.100.0.8
Oct 13 12:06:27 np0005485008 nova_compute[192512]: 2025-10-13 16:06:27.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:27 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:27Z|00230|binding|INFO|Setting lport 3b1ec4ae-579f-4e07-8443-31db62f8caff ovn-installed in OVS
Oct 13 12:06:27 np0005485008 nova_compute[192512]: 2025-10-13 16:06:27.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:27 np0005485008 systemd-udevd[223941]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:06:27 np0005485008 systemd-machined[152551]: New machine qemu-19-instance-00000015.
Oct 13 12:06:27 np0005485008 NetworkManager[51587]: <info>  [1760371587.1735] device (tap3b1ec4ae-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:06:27 np0005485008 NetworkManager[51587]: <info>  [1760371587.1747] device (tap3b1ec4ae-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:06:27 np0005485008 systemd[1]: Started Virtual Machine qemu-19-instance-00000015.
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.239 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371588.2387114, 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.240 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] VM Started (Lifecycle Event)#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.269 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.916 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371588.9159906, 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.917 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.942 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.946 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.961 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:06:28 np0005485008 nova_compute[192512]: 2025-10-13 16:06:28.962 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:06:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:30Z|00231|binding|INFO|Claiming lport 3b1ec4ae-579f-4e07-8443-31db62f8caff for this chassis.
Oct 13 12:06:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:30Z|00232|binding|INFO|3b1ec4ae-579f-4e07-8443-31db62f8caff: Claiming fa:16:3e:82:bb:95 10.100.0.8
Oct 13 12:06:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:30Z|00233|binding|INFO|Setting lport 3b1ec4ae-579f-4e07-8443-31db62f8caff up in Southbound
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.368 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:bb:95 10.100.0.8'], port_security=['fa:16:3e:82:bb:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0a1c6209-ffc3-440e-8934-3aa835a3a5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=3b1ec4ae-579f-4e07-8443-31db62f8caff) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.369 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 3b1ec4ae-579f-4e07-8443-31db62f8caff in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.370 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.385 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[32eaac09-ca25-45d5-b5cb-d94b0f6b458a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.385 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39a43da9-c1 in ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.389 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39a43da9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.389 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c37d9b81-05ef-4386-ad74-e91f1506e651]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.390 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4629b9c1-562e-492b-bf74-9c28ba98648d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 nova_compute[192512]: 2025-10-13 16:06:30.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.410 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5ceca5-67ff-465d-bd5d-1b649681c0bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.429 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[49298611-e8ee-453a-930e-88b16d3d3283]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.466 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b8eddd-c473-483f-8802-856cf2a04ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 NetworkManager[51587]: <info>  [1760371590.4769] manager: (tap39a43da9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.475 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe3c2e1-f745-4943-8e09-f3bbfc54cc03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 systemd-udevd[223977]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.513 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[79a5ee45-2451-4619-8461-37af2decae01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.518 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[0269856e-82df-48f9-ba4a-16962eba6de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 nova_compute[192512]: 2025-10-13 16:06:30.532 2 INFO nova.compute.manager [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Post operation of migration started#033[00m
Oct 13 12:06:30 np0005485008 NetworkManager[51587]: <info>  [1760371590.5469] device (tap39a43da9-c0): carrier: link connected
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.551 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1a4015-197f-421b-903f-61b911cc685d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.575 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b4beb803-ddd2-49ca-82ae-90e700a9f090]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512821, 'reachable_time': 27106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 223996, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.594 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[262765b5-a3ac-4028-91f9-acab14ff6183]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:43e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512821, 'tstamp': 512821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 223997, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.614 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c2678a-01bd-408d-b703-a2db4c30352f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512821, 'reachable_time': 27106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 223998, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.651 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa150cd-0538-4003-97a5-f3251fb214a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.721 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5035952e-ea69-4a2f-b5cd-40e8a9271ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.723 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.723 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.723 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:30 np0005485008 NetworkManager[51587]: <info>  [1760371590.7265] manager: (tap39a43da9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 13 12:06:30 np0005485008 nova_compute[192512]: 2025-10-13 16:06:30.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:30 np0005485008 kernel: tap39a43da9-c0: entered promiscuous mode
Oct 13 12:06:30 np0005485008 nova_compute[192512]: 2025-10-13 16:06:30.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.729 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:30 np0005485008 nova_compute[192512]: 2025-10-13 16:06:30.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:30Z|00234|binding|INFO|Releasing lport 5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182 from this chassis (sb_readonly=0)
Oct 13 12:06:30 np0005485008 nova_compute[192512]: 2025-10-13 16:06:30.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.732 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.733 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[63fc5896-ef98-4417-81f0-d54654961da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.734 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:06:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:30.734 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'env', 'PROCESS_TAG=haproxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39a43da9-cf4c-4fe3-ab73-bf8705320dae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:06:30 np0005485008 nova_compute[192512]: 2025-10-13 16:06:30.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:31 np0005485008 podman[224030]: 2025-10-13 16:06:31.134775607 +0000 UTC m=+0.060874687 container create ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 12:06:31 np0005485008 systemd[1]: Started libpod-conmon-ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1.scope.
Oct 13 12:06:31 np0005485008 podman[224030]: 2025-10-13 16:06:31.102892778 +0000 UTC m=+0.028991838 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:06:31 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:06:31 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ca346fd3bd3ccc0a4781c775c17978d156560190d23e5b0c60782decf03f85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:06:31 np0005485008 podman[224030]: 2025-10-13 16:06:31.231719913 +0000 UTC m=+0.157818983 container init ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 13 12:06:31 np0005485008 podman[224030]: 2025-10-13 16:06:31.238012722 +0000 UTC m=+0.164111762 container start ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 12:06:31 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[224045]: [NOTICE]   (224049) : New worker (224051) forked
Oct 13 12:06:31 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[224045]: [NOTICE]   (224049) : Loading success.
Oct 13 12:06:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:31.292 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:31 np0005485008 nova_compute[192512]: 2025-10-13 16:06:31.302 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-0a1c6209-ffc3-440e-8934-3aa835a3a5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:06:31 np0005485008 nova_compute[192512]: 2025-10-13 16:06:31.303 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-0a1c6209-ffc3-440e-8934-3aa835a3a5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:06:31 np0005485008 nova_compute[192512]: 2025-10-13 16:06:31.303 2 DEBUG nova.network.neutron [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:06:31 np0005485008 nova_compute[192512]: 2025-10-13 16:06:31.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:32 np0005485008 nova_compute[192512]: 2025-10-13 16:06:32.770 2 DEBUG nova.network.neutron [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Updating instance_info_cache with network_info: [{"id": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "address": "fa:16:3e:82:bb:95", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ec4ae-57", "ovs_interfaceid": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:06:32 np0005485008 nova_compute[192512]: 2025-10-13 16:06:32.794 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-0a1c6209-ffc3-440e-8934-3aa835a3a5f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:06:32 np0005485008 nova_compute[192512]: 2025-10-13 16:06:32.809 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:32 np0005485008 nova_compute[192512]: 2025-10-13 16:06:32.809 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:32 np0005485008 nova_compute[192512]: 2025-10-13 16:06:32.810 2 DEBUG oslo_concurrency.lockutils [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:32 np0005485008 nova_compute[192512]: 2025-10-13 16:06:32.814 2 INFO nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:06:32 np0005485008 virtqemud[192082]: Domain id=19 name='instance-00000015' uuid=0a1c6209-ffc3-440e-8934-3aa835a3a5f3 is tainted: custom-monitor
Oct 13 12:06:33 np0005485008 nova_compute[192512]: 2025-10-13 16:06:33.822 2 INFO nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:06:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:33.971 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:33.972 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:33.972 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:34 np0005485008 nova_compute[192512]: 2025-10-13 16:06:34.828 2 INFO nova.virt.libvirt.driver [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:06:34 np0005485008 nova_compute[192512]: 2025-10-13 16:06:34.835 2 DEBUG nova.compute.manager [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:06:34 np0005485008 nova_compute[192512]: 2025-10-13 16:06:34.858 2 DEBUG nova.objects.instance [None req-85b1f4b3-9720-4110-9d03-6fbc3f114818 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:06:35 np0005485008 nova_compute[192512]: 2025-10-13 16:06:35.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:35 np0005485008 podman[202884]: time="2025-10-13T16:06:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:06:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:06:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:06:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:06:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3465 "" "Go-http-client/1.1"
Oct 13 12:06:36 np0005485008 nova_compute[192512]: 2025-10-13 16:06:36.358 2 DEBUG nova.compute.manager [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2sj039pg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:06:36 np0005485008 nova_compute[192512]: 2025-10-13 16:06:36.410 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:06:36 np0005485008 nova_compute[192512]: 2025-10-13 16:06:36.410 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:06:36 np0005485008 nova_compute[192512]: 2025-10-13 16:06:36.410 2 DEBUG nova.network.neutron [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:06:36 np0005485008 nova_compute[192512]: 2025-10-13 16:06:36.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:37 np0005485008 podman[224063]: 2025-10-13 16:06:37.766713201 +0000 UTC m=+0.060924337 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009)
Oct 13 12:06:37 np0005485008 podman[224062]: 2025-10-13 16:06:37.767318761 +0000 UTC m=+0.063451629 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 12:06:37 np0005485008 podman[224064]: 2025-10-13 16:06:37.782949745 +0000 UTC m=+0.069879301 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:06:37 np0005485008 podman[224061]: 2025-10-13 16:06:37.796543024 +0000 UTC m=+0.097216705 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 12:06:37 np0005485008 podman[224070]: 2025-10-13 16:06:37.807910094 +0000 UTC m=+0.091367090 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.251 2 DEBUG nova.network.neutron [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Updating instance_info_cache with network_info: [{"id": "8848368e-f2a5-456a-b7a5-db2ffc930550", "address": "fa:16:3e:69:6b:d8", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8848368e-f2", "ovs_interfaceid": "8848368e-f2a5-456a-b7a5-db2ffc930550", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.268 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.270 2 DEBUG nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2sj039pg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.270 2 DEBUG nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Creating instance directory: /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.271 2 DEBUG nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Creating disk.info with the contents: {'/var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk': 'qcow2', '/var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.271 2 DEBUG nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.272 2 DEBUG nova.objects.instance [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.304 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.365 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.366 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.367 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.378 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.439 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.440 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.481 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.482 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.482 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.572 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.574 2 DEBUG nova.virt.disk.api [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.575 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.636 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.637 2 DEBUG nova.virt.disk.api [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.638 2 DEBUG nova.objects.instance [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.653 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.676 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk.config 485376" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.678 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk.config to /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:06:39 np0005485008 nova_compute[192512]: 2025-10-13 16:06:39.678 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk.config /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.116 2 DEBUG oslo_concurrency.processutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9/disk.config /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.118 2 DEBUG nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.121 2 DEBUG nova.virt.libvirt.vif [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-75064647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-75064647',id=22,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:05:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-td6jg9rk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',
owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:05:19Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8848368e-f2a5-456a-b7a5-db2ffc930550", "address": "fa:16:3e:69:6b:d8", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8848368e-f2", "ovs_interfaceid": "8848368e-f2a5-456a-b7a5-db2ffc930550", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.121 2 DEBUG nova.network.os_vif_util [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "8848368e-f2a5-456a-b7a5-db2ffc930550", "address": "fa:16:3e:69:6b:d8", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8848368e-f2", "ovs_interfaceid": "8848368e-f2a5-456a-b7a5-db2ffc930550", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.124 2 DEBUG nova.network.os_vif_util [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:6b:d8,bridge_name='br-int',has_traffic_filtering=True,id=8848368e-f2a5-456a-b7a5-db2ffc930550,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8848368e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.125 2 DEBUG os_vif [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:6b:d8,bridge_name='br-int',has_traffic_filtering=True,id=8848368e-f2a5-456a-b7a5-db2ffc930550,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8848368e-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.129 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8848368e-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.134 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8848368e-f2, col_values=(('external_ids', {'iface-id': '8848368e-f2a5-456a-b7a5-db2ffc930550', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:6b:d8', 'vm-uuid': '74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:40 np0005485008 NetworkManager[51587]: <info>  [1760371600.1377] manager: (tap8848368e-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.147 2 INFO os_vif [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:6b:d8,bridge_name='br-int',has_traffic_filtering=True,id=8848368e-f2a5-456a-b7a5-db2ffc930550,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8848368e-f2')#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.147 2 DEBUG nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:06:40 np0005485008 nova_compute[192512]: 2025-10-13 16:06:40.147 2 DEBUG nova.compute.manager [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2sj039pg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:06:41 np0005485008 nova_compute[192512]: 2025-10-13 16:06:41.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:42 np0005485008 nova_compute[192512]: 2025-10-13 16:06:42.285 2 DEBUG nova.network.neutron [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Port 8848368e-f2a5-456a-b7a5-db2ffc930550 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:06:42 np0005485008 nova_compute[192512]: 2025-10-13 16:06:42.287 2 DEBUG nova.compute.manager [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2sj039pg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:06:42 np0005485008 kernel: tap8848368e-f2: entered promiscuous mode
Oct 13 12:06:42 np0005485008 NetworkManager[51587]: <info>  [1760371602.5212] manager: (tap8848368e-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Oct 13 12:06:42 np0005485008 nova_compute[192512]: 2025-10-13 16:06:42.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:42 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:42Z|00235|binding|INFO|Claiming lport 8848368e-f2a5-456a-b7a5-db2ffc930550 for this additional chassis.
Oct 13 12:06:42 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:42Z|00236|binding|INFO|8848368e-f2a5-456a-b7a5-db2ffc930550: Claiming fa:16:3e:69:6b:d8 10.100.0.13
Oct 13 12:06:42 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:42Z|00237|binding|INFO|Setting lport 8848368e-f2a5-456a-b7a5-db2ffc930550 ovn-installed in OVS
Oct 13 12:06:42 np0005485008 nova_compute[192512]: 2025-10-13 16:06:42.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:42 np0005485008 nova_compute[192512]: 2025-10-13 16:06:42.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:42 np0005485008 nova_compute[192512]: 2025-10-13 16:06:42.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:42 np0005485008 systemd-udevd[224201]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:06:42 np0005485008 systemd-machined[152551]: New machine qemu-20-instance-00000016.
Oct 13 12:06:42 np0005485008 NetworkManager[51587]: <info>  [1760371602.5911] device (tap8848368e-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:06:42 np0005485008 systemd[1]: Started Virtual Machine qemu-20-instance-00000016.
Oct 13 12:06:42 np0005485008 NetworkManager[51587]: <info>  [1760371602.5917] device (tap8848368e-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:06:43 np0005485008 nova_compute[192512]: 2025-10-13 16:06:43.943 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371603.942219, 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:06:43 np0005485008 nova_compute[192512]: 2025-10-13 16:06:43.946 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] VM Started (Lifecycle Event)#033[00m
Oct 13 12:06:43 np0005485008 nova_compute[192512]: 2025-10-13 16:06:43.977 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:06:44 np0005485008 nova_compute[192512]: 2025-10-13 16:06:44.657 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371604.6567438, 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:06:44 np0005485008 nova_compute[192512]: 2025-10-13 16:06:44.658 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:06:44 np0005485008 nova_compute[192512]: 2025-10-13 16:06:44.685 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:06:44 np0005485008 nova_compute[192512]: 2025-10-13 16:06:44.690 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:06:44 np0005485008 nova_compute[192512]: 2025-10-13 16:06:44.717 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:06:45 np0005485008 nova_compute[192512]: 2025-10-13 16:06:45.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:46 np0005485008 nova_compute[192512]: 2025-10-13 16:06:46.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:48 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:48Z|00238|binding|INFO|Claiming lport 8848368e-f2a5-456a-b7a5-db2ffc930550 for this chassis.
Oct 13 12:06:48 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:48Z|00239|binding|INFO|8848368e-f2a5-456a-b7a5-db2ffc930550: Claiming fa:16:3e:69:6b:d8 10.100.0.13
Oct 13 12:06:48 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:48Z|00240|binding|INFO|Setting lport 8848368e-f2a5-456a-b7a5-db2ffc930550 up in Southbound
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.710 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:6b:d8 10.100.0.13'], port_security=['fa:16:3e:69:6b:d8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=8848368e-f2a5-456a-b7a5-db2ffc930550) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.712 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 8848368e-f2a5-456a-b7a5-db2ffc930550 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.713 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.735 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5b49b4d4-8b79-47c1-8529-2511b18e9e64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.771 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[56de6445-873a-42aa-8b17-dbaf6b9958ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.776 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9e8d71-07b0-40af-895b-cdf51d47e621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.812 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[05d0c55f-9925-478a-a618-ec6545b79427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.838 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f28825-04e6-49dd-9b88-d472943067ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512821, 'reachable_time': 27106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 224228, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.864 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1cdaec25-5f6c-43e2-86e1-70874aec4e2b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512835, 'tstamp': 512835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224229, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512837, 'tstamp': 512837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224229, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.866 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:48 np0005485008 nova_compute[192512]: 2025-10-13 16:06:48.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:48 np0005485008 nova_compute[192512]: 2025-10-13 16:06:48.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.870 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.870 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.871 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:48 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:48.871 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:49 np0005485008 nova_compute[192512]: 2025-10-13 16:06:49.254 2 INFO nova.compute.manager [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Post operation of migration started#033[00m
Oct 13 12:06:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:06:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:06:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:06:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:06:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:06:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:06:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:06:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:06:50 np0005485008 nova_compute[192512]: 2025-10-13 16:06:50.026 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:06:50 np0005485008 nova_compute[192512]: 2025-10-13 16:06:50.027 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:06:50 np0005485008 nova_compute[192512]: 2025-10-13 16:06:50.027 2 DEBUG nova.network.neutron [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:06:50 np0005485008 nova_compute[192512]: 2025-10-13 16:06:50.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:51 np0005485008 nova_compute[192512]: 2025-10-13 16:06:51.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:51 np0005485008 podman[224230]: 2025-10-13 16:06:51.817947491 +0000 UTC m=+0.107345987 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': 
'/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc.)
Oct 13 12:06:52 np0005485008 nova_compute[192512]: 2025-10-13 16:06:52.493 2 DEBUG nova.network.neutron [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Updating instance_info_cache with network_info: [{"id": "8848368e-f2a5-456a-b7a5-db2ffc930550", "address": "fa:16:3e:69:6b:d8", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8848368e-f2", "ovs_interfaceid": "8848368e-f2a5-456a-b7a5-db2ffc930550", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:06:52 np0005485008 nova_compute[192512]: 2025-10-13 16:06:52.526 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:06:52 np0005485008 nova_compute[192512]: 2025-10-13 16:06:52.546 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:52 np0005485008 nova_compute[192512]: 2025-10-13 16:06:52.546 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:52 np0005485008 nova_compute[192512]: 2025-10-13 16:06:52.547 2 DEBUG oslo_concurrency.lockutils [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:52 np0005485008 nova_compute[192512]: 2025-10-13 16:06:52.551 2 INFO nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:06:52 np0005485008 virtqemud[192082]: Domain id=20 name='instance-00000016' uuid=74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 is tainted: custom-monitor
Oct 13 12:06:53 np0005485008 nova_compute[192512]: 2025-10-13 16:06:53.558 2 INFO nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:06:54 np0005485008 nova_compute[192512]: 2025-10-13 16:06:54.565 2 INFO nova.virt.libvirt.driver [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:06:54 np0005485008 nova_compute[192512]: 2025-10-13 16:06:54.571 2 DEBUG nova.compute.manager [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:06:54 np0005485008 nova_compute[192512]: 2025-10-13 16:06:54.595 2 DEBUG nova.objects.instance [None req-63f384d7-273b-4a21-932d-95e7b50c4c16 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:06:55 np0005485008 nova_compute[192512]: 2025-10-13 16:06:55.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:56 np0005485008 nova_compute[192512]: 2025-10-13 16:06:56.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.918 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.918 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.919 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.919 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.919 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.920 2 INFO nova.compute.manager [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Terminating instance#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.921 2 DEBUG nova.compute.manager [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:06:58 np0005485008 kernel: tap8848368e-f2 (unregistering): left promiscuous mode
Oct 13 12:06:58 np0005485008 NetworkManager[51587]: <info>  [1760371618.9467] device (tap8848368e-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:58 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:58Z|00241|binding|INFO|Releasing lport 8848368e-f2a5-456a-b7a5-db2ffc930550 from this chassis (sb_readonly=0)
Oct 13 12:06:58 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:58Z|00242|binding|INFO|Setting lport 8848368e-f2a5-456a-b7a5-db2ffc930550 down in Southbound
Oct 13 12:06:58 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:58Z|00243|binding|INFO|Removing iface tap8848368e-f2 ovn-installed in OVS
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:58.964 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:6b:d8 10.100.0.13'], port_security=['fa:16:3e:69:6b:d8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=8848368e-f2a5-456a-b7a5-db2ffc930550) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:06:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:58.966 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 8848368e-f2a5-456a-b7a5-db2ffc930550 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:06:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:58.967 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:06:58 np0005485008 nova_compute[192512]: 2025-10-13 16:06:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:58.990 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[bd46167a-2842-474b-8829-081b41a11569]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 13 12:06:59 np0005485008 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000016.scope: Consumed 2.400s CPU time.
Oct 13 12:06:59 np0005485008 systemd-machined[152551]: Machine qemu-20-instance-00000016 terminated.
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.027 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[319b7cac-8bf4-4034-9023-9372948d8035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.030 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fdb894-88db-47d2-a196-b4d6b69bd308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.074 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[fde8d2e0-14fe-4b4b-9359-a4cd3783340d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.101 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f46a6f9f-6ded-414d-8b58-8e38bba74593]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 43, 'tx_packets': 7, 'rx_bytes': 2302, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 43, 'tx_packets': 7, 'rx_bytes': 2302, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512821, 'reachable_time': 27106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 224264, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.125 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[651cee1f-d400-4632-be25-66ef4fa66226]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512835, 'tstamp': 512835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224265, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512837, 'tstamp': 512837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224265, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.128 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 kernel: tap8848368e-f2: entered promiscuous mode
Oct 13 12:06:59 np0005485008 NetworkManager[51587]: <info>  [1760371619.1821] manager: (tap8848368e-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:59Z|00244|binding|INFO|Claiming lport 8848368e-f2a5-456a-b7a5-db2ffc930550 for this chassis.
Oct 13 12:06:59 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:59Z|00245|binding|INFO|8848368e-f2a5-456a-b7a5-db2ffc930550: Claiming fa:16:3e:69:6b:d8 10.100.0.13
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.183 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.184 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.184 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.184 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:59 np0005485008 kernel: tap8848368e-f2 (unregistering): left promiscuous mode
Oct 13 12:06:59 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:59Z|00246|binding|INFO|Setting lport 8848368e-f2a5-456a-b7a5-db2ffc930550 ovn-installed in OVS
Oct 13 12:06:59 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:59Z|00247|if_status|INFO|Dropped 2 log messages in last 608 seconds (most recently, 608 seconds ago) due to excessive rate
Oct 13 12:06:59 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:59Z|00248|if_status|INFO|Not setting lport 8848368e-f2a5-456a-b7a5-db2ffc930550 down as sb is readonly
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 ovn_controller[94758]: 2025-10-13T16:06:59Z|00249|binding|INFO|Releasing lport 8848368e-f2a5-456a-b7a5-db2ffc930550 from this chassis (sb_readonly=0)
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.243 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:6b:d8 10.100.0.13'], port_security=['fa:16:3e:69:6b:d8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=8848368e-f2a5-456a-b7a5-db2ffc930550) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.246 2 INFO nova.virt.libvirt.driver [-] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Instance destroyed successfully.#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.246 2 DEBUG nova.objects.instance [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.247 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 8848368e-f2a5-456a-b7a5-db2ffc930550 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.248 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:6b:d8 10.100.0.13'], port_security=['fa:16:3e:69:6b:d8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=8848368e-f2a5-456a-b7a5-db2ffc930550) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.250 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.265 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c20e4b50-00fa-4d0d-b9d3-1575ffc44042]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.284 2 DEBUG nova.virt.libvirt.vif [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:05:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-75064647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-75064647',id=22,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:05:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-td6jg9rk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:06:54Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8848368e-f2a5-456a-b7a5-db2ffc930550", "address": "fa:16:3e:69:6b:d8", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8848368e-f2", "ovs_interfaceid": "8848368e-f2a5-456a-b7a5-db2ffc930550", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.285 2 DEBUG nova.network.os_vif_util [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "8848368e-f2a5-456a-b7a5-db2ffc930550", "address": "fa:16:3e:69:6b:d8", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8848368e-f2", "ovs_interfaceid": "8848368e-f2a5-456a-b7a5-db2ffc930550", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.285 2 DEBUG nova.network.os_vif_util [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:6b:d8,bridge_name='br-int',has_traffic_filtering=True,id=8848368e-f2a5-456a-b7a5-db2ffc930550,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8848368e-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.286 2 DEBUG os_vif [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:6b:d8,bridge_name='br-int',has_traffic_filtering=True,id=8848368e-f2a5-456a-b7a5-db2ffc930550,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8848368e-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8848368e-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.293 2 INFO os_vif [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:6b:d8,bridge_name='br-int',has_traffic_filtering=True,id=8848368e-f2a5-456a-b7a5-db2ffc930550,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8848368e-f2')#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.294 2 INFO nova.virt.libvirt.driver [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Deleting instance files /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9_del#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.295 2 INFO nova.virt.libvirt.driver [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Deletion of /var/lib/nova/instances/74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9_del complete#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.307 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[83b634f9-ebe3-4477-8c48-3707eeb3a5ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.311 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c584a3f1-7130-4294-8d2a-87828c2738ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.354 2 INFO nova.compute.manager [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.355 2 DEBUG oslo.service.loopingcall [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.355 2 DEBUG nova.compute.manager [-] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.355 2 DEBUG nova.network.neutron [-] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.354 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f65e489f-5e09-4e13-bbc3-da9346646af0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.375 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[07652569-9379-4eb3-bbca-519be44aa5f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 43, 'tx_packets': 9, 'rx_bytes': 2302, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 43, 'tx_packets': 9, 'rx_bytes': 2302, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512821, 'reachable_time': 27106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 224282, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.399 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[35c67168-e480-447b-8e44-82df3b18e582]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512835, 'tstamp': 512835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224283, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512837, 'tstamp': 512837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224283, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.401 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.405 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.405 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.406 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.406 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.407 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 8848368e-f2a5-456a-b7a5-db2ffc930550 in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.408 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.431 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e232fa-18c9-4889-8b13-244304c96d7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.476 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[58c6428c-bdf6-4ccb-bfd0-02845d8d6c6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.481 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b03f38d8-c088-4b33-9358-8b74c3f232f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.521 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc14e53-be14-4fb0-8f92-8d8fabfa5150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.550 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[369478cd-9c1e-436d-8580-c90656d24278]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 43, 'tx_packets': 11, 'rx_bytes': 2302, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 43, 'tx_packets': 11, 'rx_bytes': 2302, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512821, 'reachable_time': 27106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 224289, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.572 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e638b659-7ae9-4d2b-8a3e-2416609f15e0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512835, 'tstamp': 512835}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224290, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512837, 'tstamp': 512837}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 224290, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.574 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.578 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.579 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.579 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:06:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:06:59.579 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.705 2 DEBUG nova.compute.manager [req-352938d3-738a-41f1-9307-e525db229ec7 req-afd734ab-647a-452b-b42a-1405d755f829 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Received event network-vif-unplugged-8848368e-f2a5-456a-b7a5-db2ffc930550 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.706 2 DEBUG oslo_concurrency.lockutils [req-352938d3-738a-41f1-9307-e525db229ec7 req-afd734ab-647a-452b-b42a-1405d755f829 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.707 2 DEBUG oslo_concurrency.lockutils [req-352938d3-738a-41f1-9307-e525db229ec7 req-afd734ab-647a-452b-b42a-1405d755f829 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.708 2 DEBUG oslo_concurrency.lockutils [req-352938d3-738a-41f1-9307-e525db229ec7 req-afd734ab-647a-452b-b42a-1405d755f829 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.709 2 DEBUG nova.compute.manager [req-352938d3-738a-41f1-9307-e525db229ec7 req-afd734ab-647a-452b-b42a-1405d755f829 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] No waiting events found dispatching network-vif-unplugged-8848368e-f2a5-456a-b7a5-db2ffc930550 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:06:59 np0005485008 nova_compute[192512]: 2025-10-13 16:06:59.709 2 DEBUG nova.compute.manager [req-352938d3-738a-41f1-9307-e525db229ec7 req-afd734ab-647a-452b-b42a-1405d755f829 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Received event network-vif-unplugged-8848368e-f2a5-456a-b7a5-db2ffc930550 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.269 2 DEBUG nova.network.neutron [-] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.284 2 INFO nova.compute.manager [-] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Took 0.93 seconds to deallocate network for instance.#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.330 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.331 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.336 2 DEBUG nova.compute.manager [req-27d239f9-7c21-4bd0-b159-5021cd46ecd6 req-0cb06798-c400-4d27-b94d-bc986a4e02d8 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Received event network-vif-deleted-8848368e-f2a5-456a-b7a5-db2ffc930550 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.337 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.374 2 INFO nova.scheduler.client.report [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9#033[00m
Oct 13 12:07:00 np0005485008 nova_compute[192512]: 2025-10-13 16:07:00.452 2 DEBUG oslo_concurrency.lockutils [None req-87dbcd9d-f0ba-4c24-9867-95479ea4b3bc 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.042 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.043 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.043 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.043 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.043 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.044 2 INFO nova.compute.manager [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Terminating instance#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.045 2 DEBUG nova.compute.manager [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:07:01 np0005485008 kernel: tap3b1ec4ae-57 (unregistering): left promiscuous mode
Oct 13 12:07:01 np0005485008 NetworkManager[51587]: <info>  [1760371621.0710] device (tap3b1ec4ae-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:07:01 np0005485008 ovn_controller[94758]: 2025-10-13T16:07:01Z|00250|binding|INFO|Releasing lport 3b1ec4ae-579f-4e07-8443-31db62f8caff from this chassis (sb_readonly=0)
Oct 13 12:07:01 np0005485008 ovn_controller[94758]: 2025-10-13T16:07:01Z|00251|binding|INFO|Setting lport 3b1ec4ae-579f-4e07-8443-31db62f8caff down in Southbound
Oct 13 12:07:01 np0005485008 ovn_controller[94758]: 2025-10-13T16:07:01Z|00252|binding|INFO|Removing iface tap3b1ec4ae-57 ovn-installed in OVS
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.085 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:bb:95 10.100.0.8'], port_security=['fa:16:3e:82:bb:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0a1c6209-ffc3-440e-8934-3aa835a3a5f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=3b1ec4ae-579f-4e07-8443-31db62f8caff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.086 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 3b1ec4ae-579f-4e07-8443-31db62f8caff in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.087 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.088 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[439fde05-ec11-479d-9818-ae5d7f20a0b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.089 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace which is not needed anymore#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 13 12:07:01 np0005485008 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000015.scope: Consumed 2.900s CPU time.
Oct 13 12:07:01 np0005485008 systemd-machined[152551]: Machine qemu-19-instance-00000015 terminated.
Oct 13 12:07:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[224045]: [NOTICE]   (224049) : haproxy version is 2.8.14-c23fe91
Oct 13 12:07:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[224045]: [NOTICE]   (224049) : path to executable is /usr/sbin/haproxy
Oct 13 12:07:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[224045]: [WARNING]  (224049) : Exiting Master process...
Oct 13 12:07:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[224045]: [ALERT]    (224049) : Current worker (224051) exited with code 143 (Terminated)
Oct 13 12:07:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[224045]: [WARNING]  (224049) : All workers exited. Exiting... (0)
Oct 13 12:07:01 np0005485008 systemd[1]: libpod-ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1.scope: Deactivated successfully.
Oct 13 12:07:01 np0005485008 podman[224314]: 2025-10-13 16:07:01.247324211 +0000 UTC m=+0.051553402 container died ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:07:01 np0005485008 NetworkManager[51587]: <info>  [1760371621.2725] manager: (tap3b1ec4ae-57): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1-userdata-shm.mount: Deactivated successfully.
Oct 13 12:07:01 np0005485008 systemd[1]: var-lib-containers-storage-overlay-77ca346fd3bd3ccc0a4781c775c17978d156560190d23e5b0c60782decf03f85-merged.mount: Deactivated successfully.
Oct 13 12:07:01 np0005485008 podman[224314]: 2025-10-13 16:07:01.299274073 +0000 UTC m=+0.103503254 container cleanup ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 12:07:01 np0005485008 systemd[1]: libpod-conmon-ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1.scope: Deactivated successfully.
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.327 2 INFO nova.virt.libvirt.driver [-] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Instance destroyed successfully.#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.329 2 DEBUG nova.objects.instance [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.342 2 DEBUG nova.virt.libvirt.vif [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-2055945873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-2055945873',id=21,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:05:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-1fkgi1bt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:06:34Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=0a1c6209-ffc3-440e-8934-3aa835a3a5f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "address": "fa:16:3e:82:bb:95", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ec4ae-57", "ovs_interfaceid": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.343 2 DEBUG nova.network.os_vif_util [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "address": "fa:16:3e:82:bb:95", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ec4ae-57", "ovs_interfaceid": "3b1ec4ae-579f-4e07-8443-31db62f8caff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.344 2 DEBUG nova.network.os_vif_util [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:95,bridge_name='br-int',has_traffic_filtering=True,id=3b1ec4ae-579f-4e07-8443-31db62f8caff,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ec4ae-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.344 2 DEBUG os_vif [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:95,bridge_name='br-int',has_traffic_filtering=True,id=3b1ec4ae-579f-4e07-8443-31db62f8caff,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ec4ae-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.347 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b1ec4ae-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.401 2 INFO os_vif [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:bb:95,bridge_name='br-int',has_traffic_filtering=True,id=3b1ec4ae-579f-4e07-8443-31db62f8caff,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ec4ae-57')#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.402 2 INFO nova.virt.libvirt.driver [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Deleting instance files /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3_del#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.403 2 INFO nova.virt.libvirt.driver [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Deletion of /var/lib/nova/instances/0a1c6209-ffc3-440e-8934-3aa835a3a5f3_del complete#033[00m
Oct 13 12:07:01 np0005485008 podman[224362]: 2025-10-13 16:07:01.408636542 +0000 UTC m=+0.078225165 container remove ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.415 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[39c544de-8669-44bd-beb6-29c3b4c0eb24]: (4, ('Mon Oct 13 04:07:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1)\nec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1\nMon Oct 13 04:07:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (ec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1)\nec345dba6e9e8ed08e1ebcf5206f29c5d98f6a3151c06f8fcdf0a2e8c0b3b1a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.418 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[54972b9f-e5b9-4cf9-a36f-f493defa3a6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.419 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 kernel: tap39a43da9-c0: left promiscuous mode
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.437 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfe09cd-2333-44dd-a511-d94fd07a63b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.450 2 INFO nova.compute.manager [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.451 2 DEBUG oslo.service.loopingcall [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.451 2 DEBUG nova.compute.manager [-] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.452 2 DEBUG nova.network.neutron [-] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.475 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[32010455-2b9a-4dce-b689-9ea3c91cd31d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.476 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[046c1b61-35b1-4f4e-a06b-a8804153ddcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.494 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[74798fa4-c0c5-47af-9e04-e5e4d996788b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512812, 'reachable_time': 28595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 224377, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.496 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:07:01 np0005485008 systemd[1]: run-netns-ovnmeta\x2d39a43da9\x2dcf4c\x2d4fe3\x2dab73\x2dbf8705320dae.mount: Deactivated successfully.
Oct 13 12:07:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:01.497 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[cbfff37f-0b2f-4e77-b3bb-e87be1b6d18a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.830 2 DEBUG nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Received event network-vif-plugged-8848368e-f2a5-456a-b7a5-db2ffc930550 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.831 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.831 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.831 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.832 2 DEBUG nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] No waiting events found dispatching network-vif-plugged-8848368e-f2a5-456a-b7a5-db2ffc930550 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.832 2 WARNING nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Received unexpected event network-vif-plugged-8848368e-f2a5-456a-b7a5-db2ffc930550 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.832 2 DEBUG nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Received event network-vif-unplugged-3b1ec4ae-579f-4e07-8443-31db62f8caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.832 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.833 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.833 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.833 2 DEBUG nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] No waiting events found dispatching network-vif-unplugged-3b1ec4ae-579f-4e07-8443-31db62f8caff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.833 2 DEBUG nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Received event network-vif-unplugged-3b1ec4ae-579f-4e07-8443-31db62f8caff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.834 2 DEBUG nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Received event network-vif-plugged-3b1ec4ae-579f-4e07-8443-31db62f8caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.834 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.834 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.834 2 DEBUG oslo_concurrency.lockutils [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.834 2 DEBUG nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] No waiting events found dispatching network-vif-plugged-3b1ec4ae-579f-4e07-8443-31db62f8caff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.835 2 WARNING nova.compute.manager [req-0393beb8-71ee-4094-881c-99bcfe8b7b0b req-95e68fa2-6a0a-4d41-90e6-65d430f5ea0d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Received unexpected event network-vif-plugged-3b1ec4ae-579f-4e07-8443-31db62f8caff for instance with vm_state active and task_state deleting.#033[00m
Oct 13 12:07:01 np0005485008 nova_compute[192512]: 2025-10-13 16:07:01.993 2 DEBUG nova.network.neutron [-] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:07:02 np0005485008 nova_compute[192512]: 2025-10-13 16:07:02.013 2 INFO nova.compute.manager [-] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Took 0.56 seconds to deallocate network for instance.#033[00m
Oct 13 12:07:02 np0005485008 nova_compute[192512]: 2025-10-13 16:07:02.059 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:02 np0005485008 nova_compute[192512]: 2025-10-13 16:07:02.060 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:02 np0005485008 nova_compute[192512]: 2025-10-13 16:07:02.064 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:02 np0005485008 nova_compute[192512]: 2025-10-13 16:07:02.090 2 INFO nova.scheduler.client.report [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance 0a1c6209-ffc3-440e-8934-3aa835a3a5f3#033[00m
Oct 13 12:07:02 np0005485008 nova_compute[192512]: 2025-10-13 16:07:02.154 2 DEBUG oslo_concurrency.lockutils [None req-27b1bc1e-5430-439e-9d70-cbed527d8d8a 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "0a1c6209-ffc3-440e-8934-3aa835a3a5f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:02 np0005485008 nova_compute[192512]: 2025-10-13 16:07:02.417 2 DEBUG nova.compute.manager [req-fdc0d912-d7e9-403b-b4d4-ca099c278606 req-82613d98-7b01-4cf7-a844-57c4f8464991 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Received event network-vif-deleted-3b1ec4ae-579f-4e07-8443-31db62f8caff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:07:05 np0005485008 podman[202884]: time="2025-10-13T16:07:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:07:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:07:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:07:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:07:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 13 12:07:06 np0005485008 nova_compute[192512]: 2025-10-13 16:07:06.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:06 np0005485008 nova_compute[192512]: 2025-10-13 16:07:06.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:08 np0005485008 podman[224379]: 2025-10-13 16:07:08.768590781 +0000 UTC m=+0.066020539 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 12:07:08 np0005485008 podman[224378]: 2025-10-13 16:07:08.769230301 +0000 UTC m=+0.069481908 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible)
Oct 13 12:07:08 np0005485008 podman[224380]: 2025-10-13 16:07:08.806624813 +0000 UTC m=+0.099943341 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 12:07:08 np0005485008 podman[224381]: 2025-10-13 16:07:08.813612915 +0000 UTC m=+0.098816317 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:07:08 np0005485008 podman[224387]: 2025-10-13 16:07:08.848776096 +0000 UTC m=+0.135197566 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 13 12:07:11 np0005485008 nova_compute[192512]: 2025-10-13 16:07:11.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:11 np0005485008 nova_compute[192512]: 2025-10-13 16:07:11.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:14 np0005485008 nova_compute[192512]: 2025-10-13 16:07:14.240 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371619.238618, 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:07:14 np0005485008 nova_compute[192512]: 2025-10-13 16:07:14.241 2 INFO nova.compute.manager [-] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:07:14 np0005485008 nova_compute[192512]: 2025-10-13 16:07:14.260 2 DEBUG nova.compute.manager [None req-63a204a8-5053-459f-bab7-33a9f162bfbc - - - - - -] [instance: 74b7a5a3-9185-4bd3-b9a3-1dcdfae0f9f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:07:16 np0005485008 nova_compute[192512]: 2025-10-13 16:07:16.325 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371621.3246183, 0a1c6209-ffc3-440e-8934-3aa835a3a5f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:07:16 np0005485008 nova_compute[192512]: 2025-10-13 16:07:16.326 2 INFO nova.compute.manager [-] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:07:16 np0005485008 nova_compute[192512]: 2025-10-13 16:07:16.351 2 DEBUG nova.compute.manager [None req-95c7643d-fbed-41b0-9dbd-a52edb37b35a - - - - - -] [instance: 0a1c6209-ffc3-440e-8934-3aa835a3a5f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:07:16 np0005485008 nova_compute[192512]: 2025-10-13 16:07:16.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:16 np0005485008 nova_compute[192512]: 2025-10-13 16:07:16.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:07:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:07:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:07:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:07:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:07:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:07:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:07:20 np0005485008 nova_compute[192512]: 2025-10-13 16:07:20.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:20 np0005485008 nova_compute[192512]: 2025-10-13 16:07:20.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:20 np0005485008 nova_compute[192512]: 2025-10-13 16:07:20.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:07:21 np0005485008 nova_compute[192512]: 2025-10-13 16:07:21.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:21 np0005485008 nova_compute[192512]: 2025-10-13 16:07:21.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:22 np0005485008 podman[224478]: 2025-10-13 16:07:22.77683304 +0000 UTC m=+0.072984689 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 12:07:23 np0005485008 nova_compute[192512]: 2025-10-13 16:07:23.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:24 np0005485008 nova_compute[192512]: 2025-10-13 16:07:24.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:24 np0005485008 nova_compute[192512]: 2025-10-13 16:07:24.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:26 np0005485008 nova_compute[192512]: 2025-10-13 16:07:26.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:26 np0005485008 nova_compute[192512]: 2025-10-13 16:07:26.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:26 np0005485008 nova_compute[192512]: 2025-10-13 16:07:26.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:07:26 np0005485008 nova_compute[192512]: 2025-10-13 16:07:26.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:07:26 np0005485008 nova_compute[192512]: 2025-10-13 16:07:26.446 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:07:26 np0005485008 nova_compute[192512]: 2025-10-13 16:07:26.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:27 np0005485008 nova_compute[192512]: 2025-10-13 16:07:27.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.454 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.455 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.456 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.456 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.629 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.630 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5848MB free_disk=73.46338272094727GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.631 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.631 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.703 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.704 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.725 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.741 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.764 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:07:28 np0005485008 nova_compute[192512]: 2025-10-13 16:07:28.765 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:29 np0005485008 nova_compute[192512]: 2025-10-13 16:07:29.766 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:31 np0005485008 nova_compute[192512]: 2025-10-13 16:07:31.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:31 np0005485008 nova_compute[192512]: 2025-10-13 16:07:31.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:31 np0005485008 ovn_controller[94758]: 2025-10-13T16:07:31Z|00253|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 13 12:07:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:33.973 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:07:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:33.974 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:07:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:33.974 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:07:35 np0005485008 podman[202884]: time="2025-10-13T16:07:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:07:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:07:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:07:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:07:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Oct 13 12:07:36 np0005485008 nova_compute[192512]: 2025-10-13 16:07:36.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:36 np0005485008 nova_compute[192512]: 2025-10-13 16:07:36.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:07:36 np0005485008 nova_compute[192512]: 2025-10-13 16:07:36.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:39 np0005485008 podman[224502]: 2025-10-13 16:07:39.777859422 +0000 UTC m=+0.073010090 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 12:07:39 np0005485008 podman[224507]: 2025-10-13 16:07:39.786261688 +0000 UTC m=+0.066168265 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009)
Oct 13 12:07:39 np0005485008 podman[224503]: 2025-10-13 16:07:39.788784677 +0000 UTC m=+0.077551414 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid)
Oct 13 12:07:39 np0005485008 podman[224515]: 2025-10-13 16:07:39.798459463 +0000 UTC m=+0.071620266 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:07:39 np0005485008 podman[224517]: 2025-10-13 16:07:39.831492577 +0000 UTC m=+0.097749512 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:07:41 np0005485008 nova_compute[192512]: 2025-10-13 16:07:41.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:41 np0005485008 nova_compute[192512]: 2025-10-13 16:07:41.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:46 np0005485008 nova_compute[192512]: 2025-10-13 16:07:46.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:46 np0005485008 nova_compute[192512]: 2025-10-13 16:07:46.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:07:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:07:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:07:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:07:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:07:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:07:51 np0005485008 nova_compute[192512]: 2025-10-13 16:07:51.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:51 np0005485008 nova_compute[192512]: 2025-10-13 16:07:51.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:53 np0005485008 podman[224598]: 2025-10-13 16:07:53.75502471 +0000 UTC m=+0.062321633 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public)
Oct 13 12:07:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:54.766 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:07:54 np0005485008 nova_compute[192512]: 2025-10-13 16:07:54.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:54.767 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:07:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:07:54.768 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:07:56 np0005485008 nova_compute[192512]: 2025-10-13 16:07:56.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:07:56 np0005485008 nova_compute[192512]: 2025-10-13 16:07:56.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:01 np0005485008 nova_compute[192512]: 2025-10-13 16:08:01.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:01 np0005485008 nova_compute[192512]: 2025-10-13 16:08:01.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:05 np0005485008 podman[202884]: time="2025-10-13T16:08:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:08:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:08:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:08:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:08:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3003 "" "Go-http-client/1.1"
Oct 13 12:08:06 np0005485008 nova_compute[192512]: 2025-10-13 16:08:06.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:06 np0005485008 nova_compute[192512]: 2025-10-13 16:08:06.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:10 np0005485008 podman[224623]: 2025-10-13 16:08:10.785963114 +0000 UTC m=+0.071884121 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:08:10 np0005485008 podman[224624]: 2025-10-13 16:08:10.796323474 +0000 UTC m=+0.071507659 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:08:10 np0005485008 podman[224621]: 2025-10-13 16:08:10.810442494 +0000 UTC m=+0.093896243 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 12:08:10 np0005485008 podman[224622]: 2025-10-13 16:08:10.826380831 +0000 UTC m=+0.105181811 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid)
Oct 13 12:08:10 np0005485008 podman[224625]: 2025-10-13 16:08:10.833131636 +0000 UTC m=+0.106840744 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:08:11 np0005485008 nova_compute[192512]: 2025-10-13 16:08:11.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:11 np0005485008 nova_compute[192512]: 2025-10-13 16:08:11.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:16 np0005485008 nova_compute[192512]: 2025-10-13 16:08:16.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:16 np0005485008 nova_compute[192512]: 2025-10-13 16:08:16.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:08:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:08:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:08:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:08:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:08:20 np0005485008 nova_compute[192512]: 2025-10-13 16:08:20.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:20 np0005485008 nova_compute[192512]: 2025-10-13 16:08:20.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:08:21 np0005485008 nova_compute[192512]: 2025-10-13 16:08:21.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:21 np0005485008 nova_compute[192512]: 2025-10-13 16:08:21.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:22 np0005485008 nova_compute[192512]: 2025-10-13 16:08:22.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:23 np0005485008 nova_compute[192512]: 2025-10-13 16:08:23.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:24 np0005485008 nova_compute[192512]: 2025-10-13 16:08:24.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:24 np0005485008 podman[224723]: 2025-10-13 16:08:24.764560431 +0000 UTC m=+0.067711817 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git)
Oct 13 12:08:25 np0005485008 nova_compute[192512]: 2025-10-13 16:08:25.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:26 np0005485008 nova_compute[192512]: 2025-10-13 16:08:26.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:26 np0005485008 nova_compute[192512]: 2025-10-13 16:08:26.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:28 np0005485008 nova_compute[192512]: 2025-10-13 16:08:28.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:28 np0005485008 nova_compute[192512]: 2025-10-13 16:08:28.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:08:28 np0005485008 nova_compute[192512]: 2025-10-13 16:08:28.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:08:28 np0005485008 nova_compute[192512]: 2025-10-13 16:08:28.442 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:08:28 np0005485008 nova_compute[192512]: 2025-10-13 16:08:28.443 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:29 np0005485008 nova_compute[192512]: 2025-10-13 16:08:29.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.450 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.451 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.451 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.451 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.636 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.637 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5878MB free_disk=73.46343231201172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.637 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.637 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.717 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.717 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.754 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.797 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.797 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.811 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.840 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.856 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.872 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.874 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:08:30 np0005485008 nova_compute[192512]: 2025-10-13 16:08:30.874 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:08:31 np0005485008 nova_compute[192512]: 2025-10-13 16:08:31.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:31 np0005485008 nova_compute[192512]: 2025-10-13 16:08:31.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:08:33.974 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:08:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:08:33.975 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:08:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:08:33.975 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:08:35 np0005485008 podman[202884]: time="2025-10-13T16:08:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:08:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:08:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:08:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:08:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 13 12:08:36 np0005485008 nova_compute[192512]: 2025-10-13 16:08:36.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:36 np0005485008 nova_compute[192512]: 2025-10-13 16:08:36.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:41 np0005485008 nova_compute[192512]: 2025-10-13 16:08:41.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:41 np0005485008 podman[224747]: 2025-10-13 16:08:41.76454693 +0000 UTC m=+0.055878990 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 13 12:08:41 np0005485008 podman[224745]: 2025-10-13 16:08:41.767552316 +0000 UTC m=+0.068472191 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 12:08:41 np0005485008 podman[224746]: 2025-10-13 16:08:41.769172177 +0000 UTC m=+0.065746853 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:08:41 np0005485008 podman[224749]: 2025-10-13 16:08:41.792149729 +0000 UTC m=+0.077427906 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:08:41 np0005485008 podman[224754]: 2025-10-13 16:08:41.82072253 +0000 UTC m=+0.106031898 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 13 12:08:41 np0005485008 nova_compute[192512]: 2025-10-13 16:08:41.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:44 np0005485008 ovn_controller[94758]: 2025-10-13T16:08:44Z|00254|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 13 12:08:45 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 12:08:46 np0005485008 nova_compute[192512]: 2025-10-13 16:08:46.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:46 np0005485008 nova_compute[192512]: 2025-10-13 16:08:46.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:08:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:08:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:08:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:08:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:08:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:08:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:08:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:08:51 np0005485008 nova_compute[192512]: 2025-10-13 16:08:51.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:51 np0005485008 nova_compute[192512]: 2025-10-13 16:08:51.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:55 np0005485008 podman[224848]: 2025-10-13 16:08:55.75235905 +0000 UTC m=+0.055294643 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 12:08:56 np0005485008 nova_compute[192512]: 2025-10-13 16:08:56.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:08:56 np0005485008 nova_compute[192512]: 2025-10-13 16:08:56.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:01 np0005485008 nova_compute[192512]: 2025-10-13 16:09:01.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:01 np0005485008 nova_compute[192512]: 2025-10-13 16:09:01.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:05 np0005485008 podman[202884]: time="2025-10-13T16:09:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:09:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:09:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:09:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:09:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 13 12:09:06 np0005485008 nova_compute[192512]: 2025-10-13 16:09:06.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:06 np0005485008 nova_compute[192512]: 2025-10-13 16:09:06.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:11 np0005485008 nova_compute[192512]: 2025-10-13 16:09:11.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:11 np0005485008 nova_compute[192512]: 2025-10-13 16:09:11.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:12 np0005485008 podman[224871]: 2025-10-13 16:09:12.778817592 +0000 UTC m=+0.070956961 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:09:12 np0005485008 podman[224872]: 2025-10-13 16:09:12.779312908 +0000 UTC m=+0.072328725 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible)
Oct 13 12:09:12 np0005485008 podman[224880]: 2025-10-13 16:09:12.805519892 +0000 UTC m=+0.084936615 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:09:12 np0005485008 podman[224873]: 2025-10-13 16:09:12.822072069 +0000 UTC m=+0.102650500 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 13 12:09:12 np0005485008 podman[224885]: 2025-10-13 16:09:12.831275403 +0000 UTC m=+0.100865234 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:09:16 np0005485008 nova_compute[192512]: 2025-10-13 16:09:16.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:16 np0005485008 nova_compute[192512]: 2025-10-13 16:09:16.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:09:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:09:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:09:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:09:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:09:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:09:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:09:21 np0005485008 nova_compute[192512]: 2025-10-13 16:09:21.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:21 np0005485008 nova_compute[192512]: 2025-10-13 16:09:21.874 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:21 np0005485008 nova_compute[192512]: 2025-10-13 16:09:21.875 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:09:21 np0005485008 nova_compute[192512]: 2025-10-13 16:09:21.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:23 np0005485008 nova_compute[192512]: 2025-10-13 16:09:23.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:23 np0005485008 nova_compute[192512]: 2025-10-13 16:09:23.947 2 DEBUG nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Creating tmpfile /var/lib/nova/instances/tmp2pj8jyps to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:09:23 np0005485008 nova_compute[192512]: 2025-10-13 16:09:23.972 2 DEBUG nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Creating tmpfile /var/lib/nova/instances/tmpssvq52n2 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:09:24 np0005485008 nova_compute[192512]: 2025-10-13 16:09:24.059 2 DEBUG nova.compute.manager [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2pj8jyps',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:09:24 np0005485008 nova_compute[192512]: 2025-10-13 16:09:24.082 2 DEBUG nova.compute.manager [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpssvq52n2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:09:24 np0005485008 nova_compute[192512]: 2025-10-13 16:09:24.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:25 np0005485008 nova_compute[192512]: 2025-10-13 16:09:25.107 2 DEBUG nova.compute.manager [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2pj8jyps',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='f9754727-88a8-41a9-a7bb-63bb67701c46',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:09:25 np0005485008 nova_compute[192512]: 2025-10-13 16:09:25.138 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-f9754727-88a8-41a9-a7bb-63bb67701c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:09:25 np0005485008 nova_compute[192512]: 2025-10-13 16:09:25.138 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-f9754727-88a8-41a9-a7bb-63bb67701c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:09:25 np0005485008 nova_compute[192512]: 2025-10-13 16:09:25.139 2 DEBUG nova.network.neutron [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:09:25 np0005485008 nova_compute[192512]: 2025-10-13 16:09:25.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:25 np0005485008 nova_compute[192512]: 2025-10-13 16:09:25.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.249 2 DEBUG nova.network.neutron [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Updating instance_info_cache with network_info: [{"id": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "address": "fa:16:3e:e2:c8:6f", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe592c8-7d", "ovs_interfaceid": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.271 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-f9754727-88a8-41a9-a7bb-63bb67701c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.273 2 DEBUG nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2pj8jyps',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='f9754727-88a8-41a9-a7bb-63bb67701c46',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.273 2 DEBUG nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Creating instance directory: /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.273 2 DEBUG nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Creating disk.info with the contents: {'/var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk': 'qcow2', '/var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.274 2 DEBUG nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.274 2 DEBUG nova.objects.instance [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f9754727-88a8-41a9-a7bb-63bb67701c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.302 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.403 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.404 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.405 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.422 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.518 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.519 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.573 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.574 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.574 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.663 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.664 2 DEBUG nova.virt.disk.api [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.665 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.725 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.727 2 DEBUG nova.virt.disk.api [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.728 2 DEBUG nova.objects.instance [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid f9754727-88a8-41a9-a7bb-63bb67701c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.743 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.768 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk.config 485376" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.771 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk.config to /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.771 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk.config /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:26 np0005485008 podman[224985]: 2025-10-13 16:09:26.801333998 +0000 UTC m=+0.096540896 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter)
Oct 13 12:09:26 np0005485008 nova_compute[192512]: 2025-10-13 16:09:26.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.262 2 DEBUG oslo_concurrency.processutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk.config /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.263 2 DEBUG nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.264 2 DEBUG nova.virt.libvirt.vif [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:08:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1872308486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1872308486',id=24,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:08:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-2n98f1x3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:08:22Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=f9754727-88a8-41a9-a7bb-63bb67701c46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "address": "fa:16:3e:e2:c8:6f", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5fe592c8-7d", "ovs_interfaceid": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.264 2 DEBUG nova.network.os_vif_util [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "address": "fa:16:3e:e2:c8:6f", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5fe592c8-7d", "ovs_interfaceid": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.265 2 DEBUG nova.network.os_vif_util [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:c8:6f,bridge_name='br-int',has_traffic_filtering=True,id=5fe592c8-7d19-4183-bf59-ce0005fa0c0d,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe592c8-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.266 2 DEBUG os_vif [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:c8:6f,bridge_name='br-int',has_traffic_filtering=True,id=5fe592c8-7d19-4183-bf59-ce0005fa0c0d,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe592c8-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.267 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fe592c8-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fe592c8-7d, col_values=(('external_ids', {'iface-id': '5fe592c8-7d19-4183-bf59-ce0005fa0c0d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:c8:6f', 'vm-uuid': 'f9754727-88a8-41a9-a7bb-63bb67701c46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:27 np0005485008 NetworkManager[51587]: <info>  [1760371767.2732] manager: (tap5fe592c8-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.280 2 INFO os_vif [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:c8:6f,bridge_name='br-int',has_traffic_filtering=True,id=5fe592c8-7d19-4183-bf59-ce0005fa0c0d,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe592c8-7d')#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.280 2 DEBUG nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:09:27 np0005485008 nova_compute[192512]: 2025-10-13 16:09:27.280 2 DEBUG nova.compute.manager [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2pj8jyps',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='f9754727-88a8-41a9-a7bb-63bb67701c46',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:09:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:29.357 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:29.360 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:09:29 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:29.362 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.446 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.447 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.887 2 DEBUG nova.network.neutron [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Port 5fe592c8-7d19-4183-bf59-ce0005fa0c0d updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:09:29 np0005485008 nova_compute[192512]: 2025-10-13 16:09:29.890 2 DEBUG nova.compute.manager [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp2pj8jyps',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='f9754727-88a8-41a9-a7bb-63bb67701c46',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:09:30 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 12:09:30 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 12:09:30 np0005485008 kernel: tap5fe592c8-7d: entered promiscuous mode
Oct 13 12:09:30 np0005485008 NetworkManager[51587]: <info>  [1760371770.2045] manager: (tap5fe592c8-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 13 12:09:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:30Z|00255|binding|INFO|Claiming lport 5fe592c8-7d19-4183-bf59-ce0005fa0c0d for this additional chassis.
Oct 13 12:09:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:30Z|00256|binding|INFO|5fe592c8-7d19-4183-bf59-ce0005fa0c0d: Claiming fa:16:3e:e2:c8:6f 10.100.0.6
Oct 13 12:09:30 np0005485008 nova_compute[192512]: 2025-10-13 16:09:30.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:30Z|00257|binding|INFO|Setting lport 5fe592c8-7d19-4183-bf59-ce0005fa0c0d ovn-installed in OVS
Oct 13 12:09:30 np0005485008 nova_compute[192512]: 2025-10-13 16:09:30.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:30 np0005485008 nova_compute[192512]: 2025-10-13 16:09:30.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:30 np0005485008 systemd-udevd[225045]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:09:30 np0005485008 systemd-machined[152551]: New machine qemu-21-instance-00000018.
Oct 13 12:09:30 np0005485008 systemd[1]: Started Virtual Machine qemu-21-instance-00000018.
Oct 13 12:09:30 np0005485008 NetworkManager[51587]: <info>  [1760371770.2598] device (tap5fe592c8-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:09:30 np0005485008 NetworkManager[51587]: <info>  [1760371770.2614] device (tap5fe592c8-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:09:30 np0005485008 nova_compute[192512]: 2025-10-13 16:09:30.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.457 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.458 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.459 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.459 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.473 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371771.4733431, f9754727-88a8-41a9-a7bb-63bb67701c46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.474 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] VM Started (Lifecycle Event)#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.503 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.542 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.660 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.661 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.722 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.882 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.883 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5848MB free_disk=73.46268081665039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.884 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.884 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.937 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration for instance 3712da29-1024-46ae-b142-57fa5083baa0 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.938 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration for instance f9754727-88a8-41a9-a7bb-63bb67701c46 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.980 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Updating resource usage from migration fe2d6c93-75c8-4821-b786-7443f6f026aa#033[00m
Oct 13 12:09:31 np0005485008 nova_compute[192512]: 2025-10-13 16:09:31.981 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Starting to track incoming migration fe2d6c93-75c8-4821-b786-7443f6f026aa with flavor ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.000 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Updating resource usage from migration 2dcf415f-15b2-4efc-9edd-9b67f1d6f2e6#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.000 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Starting to track incoming migration 2dcf415f-15b2-4efc-9edd-9b67f1d6f2e6 with flavor ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.065 2 WARNING nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance f9754727-88a8-41a9-a7bb-63bb67701c46 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.088 2 WARNING nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 3712da29-1024-46ae-b142-57fa5083baa0 has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.089 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.089 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.153 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371772.1512594, f9754727-88a8-41a9-a7bb-63bb67701c46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.153 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.161 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.178 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.180 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.186 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.207 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.207 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.208 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:09:32 np0005485008 nova_compute[192512]: 2025-10-13 16:09:32.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:33 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:33Z|00258|binding|INFO|Claiming lport 5fe592c8-7d19-4183-bf59-ce0005fa0c0d for this chassis.
Oct 13 12:09:33 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:33Z|00259|binding|INFO|5fe592c8-7d19-4183-bf59-ce0005fa0c0d: Claiming fa:16:3e:e2:c8:6f 10.100.0.6
Oct 13 12:09:33 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:33Z|00260|binding|INFO|Setting lport 5fe592c8-7d19-4183-bf59-ce0005fa0c0d up in Southbound
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.905 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:c8:6f 10.100.0.6'], port_security=['fa:16:3e:e2:c8:6f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f9754727-88a8-41a9-a7bb-63bb67701c46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=5fe592c8-7d19-4183-bf59-ce0005fa0c0d) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.908 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe592c8-7d19-4183-bf59-ce0005fa0c0d in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.911 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.929 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8c480c54-213f-4a2b-a7f8-298bf8348f54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.930 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39a43da9-c1 in ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.934 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39a43da9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.935 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ff438fd8-beef-4ec0-83d6-9063e704d9da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.936 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[07b5c2f0-88c7-4c4c-8ceb-f8d54b867b71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.952 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[488f6028-003f-491c-826f-a4e48f5f3f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.970 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[de2cf675-cb22-4555-874b-b96381620e0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.975 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.975 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:33.976 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.001 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d004ae41-8d3b-45e0-a3de-9218402cdd24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.008 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff4c14e-df3b-495f-9764-d185dd66aa35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 NetworkManager[51587]: <info>  [1760371774.0109] manager: (tap39a43da9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Oct 13 12:09:34 np0005485008 systemd-udevd[225090]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.047 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d3686d8a-ada6-424e-a26d-426c0284d9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.051 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[a2725fcf-1d5e-4cde-8bb2-f9c0c7700d53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 NetworkManager[51587]: <info>  [1760371774.0859] device (tap39a43da9-c0): carrier: link connected
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.092 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[1122d5c6-1859-421f-8628-24cc3142fdc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 nova_compute[192512]: 2025-10-13 16:09:34.095 2 INFO nova.compute.manager [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Post operation of migration started#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.112 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[08bf0617-d5bd-4c49-966c-4eb804b6292e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531175, 'reachable_time': 41822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 225109, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.131 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f5860ee3-d555-4891-89ea-6f6f1e478235]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:43e9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531175, 'tstamp': 531175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 225110, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.157 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ae67be-00b5-4b5d-914a-7c5af869c9f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531175, 'reachable_time': 41822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 225111, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.191 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[991937e1-66bf-401c-adc7-6985dd6e8ba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.254 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[af70f8fb-0543-4c80-9662-3ada3ced162f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.256 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.256 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.257 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:34 np0005485008 NetworkManager[51587]: <info>  [1760371774.2599] manager: (tap39a43da9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Oct 13 12:09:34 np0005485008 nova_compute[192512]: 2025-10-13 16:09:34.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:34 np0005485008 kernel: tap39a43da9-c0: entered promiscuous mode
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.262 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:34Z|00261|binding|INFO|Releasing lport 5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182 from this chassis (sb_readonly=0)
Oct 13 12:09:34 np0005485008 nova_compute[192512]: 2025-10-13 16:09:34.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:34 np0005485008 nova_compute[192512]: 2025-10-13 16:09:34.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.281 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.282 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[122eef14-17f3-44f4-8856-3fe37c0363af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.283 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/39a43da9-cf4c-4fe3-ab73-bf8705320dae.pid.haproxy
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 39a43da9-cf4c-4fe3-ab73-bf8705320dae
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:09:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:34.284 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'env', 'PROCESS_TAG=haproxy-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39a43da9-cf4c-4fe3-ab73-bf8705320dae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:09:34 np0005485008 nova_compute[192512]: 2025-10-13 16:09:34.665 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-f9754727-88a8-41a9-a7bb-63bb67701c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:09:34 np0005485008 nova_compute[192512]: 2025-10-13 16:09:34.666 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-f9754727-88a8-41a9-a7bb-63bb67701c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:09:34 np0005485008 nova_compute[192512]: 2025-10-13 16:09:34.667 2 DEBUG nova.network.neutron [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:09:34 np0005485008 podman[225144]: 2025-10-13 16:09:34.671531903 +0000 UTC m=+0.060034383 container create b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 12:09:34 np0005485008 systemd[1]: Started libpod-conmon-b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa.scope.
Oct 13 12:09:34 np0005485008 podman[225144]: 2025-10-13 16:09:34.637177609 +0000 UTC m=+0.025680159 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:09:34 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:09:34 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38a32304c81dfd82e16d6a816bb5e93e61a7063939cb077a20949c9e0d07ad9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:09:34 np0005485008 podman[225144]: 2025-10-13 16:09:34.759408252 +0000 UTC m=+0.147910752 container init b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:09:34 np0005485008 podman[225144]: 2025-10-13 16:09:34.764924527 +0000 UTC m=+0.153426997 container start b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:09:34 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [NOTICE]   (225162) : New worker (225164) forked
Oct 13 12:09:34 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [NOTICE]   (225162) : Loading success.
Oct 13 12:09:35 np0005485008 podman[202884]: time="2025-10-13T16:09:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:09:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:09:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20779 "" "Go-http-client/1.1"
Oct 13 12:09:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:09:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3466 "" "Go-http-client/1.1"
Oct 13 12:09:35 np0005485008 nova_compute[192512]: 2025-10-13 16:09:35.893 2 DEBUG nova.network.neutron [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Updating instance_info_cache with network_info: [{"id": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "address": "fa:16:3e:e2:c8:6f", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe592c8-7d", "ovs_interfaceid": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:09:35 np0005485008 nova_compute[192512]: 2025-10-13 16:09:35.914 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-f9754727-88a8-41a9-a7bb-63bb67701c46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:09:35 np0005485008 nova_compute[192512]: 2025-10-13 16:09:35.933 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:35 np0005485008 nova_compute[192512]: 2025-10-13 16:09:35.934 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:35 np0005485008 nova_compute[192512]: 2025-10-13 16:09:35.934 2 DEBUG oslo_concurrency.lockutils [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:35 np0005485008 nova_compute[192512]: 2025-10-13 16:09:35.939 2 INFO nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:09:35 np0005485008 virtqemud[192082]: Domain id=21 name='instance-00000018' uuid=f9754727-88a8-41a9-a7bb-63bb67701c46 is tainted: custom-monitor
Oct 13 12:09:36 np0005485008 nova_compute[192512]: 2025-10-13 16:09:36.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:36 np0005485008 nova_compute[192512]: 2025-10-13 16:09:36.947 2 INFO nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:09:37 np0005485008 nova_compute[192512]: 2025-10-13 16:09:37.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:37 np0005485008 nova_compute[192512]: 2025-10-13 16:09:37.953 2 INFO nova.virt.libvirt.driver [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:09:37 np0005485008 nova_compute[192512]: 2025-10-13 16:09:37.960 2 DEBUG nova.compute.manager [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:09:37 np0005485008 nova_compute[192512]: 2025-10-13 16:09:37.981 2 DEBUG nova.objects.instance [None req-b7d25515-9515-4795-8f78-a488430c7232 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:09:38 np0005485008 nova_compute[192512]: 2025-10-13 16:09:38.202 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:09:40 np0005485008 nova_compute[192512]: 2025-10-13 16:09:40.291 2 DEBUG nova.compute.manager [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpssvq52n2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='3712da29-1024-46ae-b142-57fa5083baa0',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:09:40 np0005485008 nova_compute[192512]: 2025-10-13 16:09:40.321 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-3712da29-1024-46ae-b142-57fa5083baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:09:40 np0005485008 nova_compute[192512]: 2025-10-13 16:09:40.322 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-3712da29-1024-46ae-b142-57fa5083baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:09:40 np0005485008 nova_compute[192512]: 2025-10-13 16:09:40.322 2 DEBUG nova.network.neutron [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:09:41 np0005485008 nova_compute[192512]: 2025-10-13 16:09:41.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.383 2 DEBUG nova.network.neutron [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Updating instance_info_cache with network_info: [{"id": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "address": "fa:16:3e:4e:d4:db", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa56ca6e7-1a", "ovs_interfaceid": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.409 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-3712da29-1024-46ae-b142-57fa5083baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.413 2 DEBUG nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpssvq52n2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='3712da29-1024-46ae-b142-57fa5083baa0',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.414 2 DEBUG nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Creating instance directory: /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.415 2 DEBUG nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Creating disk.info with the contents: {'/var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk': 'qcow2', '/var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.416 2 DEBUG nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.416 2 DEBUG nova.objects.instance [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3712da29-1024-46ae-b142-57fa5083baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.447 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.534 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.536 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.536 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.547 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.621 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.623 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.660 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.661 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.662 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.724 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.725 2 DEBUG nova.virt.disk.api [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.726 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.786 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.788 2 DEBUG nova.virt.disk.api [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.788 2 DEBUG nova.objects.instance [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 3712da29-1024-46ae-b142-57fa5083baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.808 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.840 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk.config 485376" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.842 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk.config to /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:09:42 np0005485008 nova_compute[192512]: 2025-10-13 16:09:42.842 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk.config /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.278 2 DEBUG oslo_concurrency.processutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0/disk.config /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.280 2 DEBUG nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.282 2 DEBUG nova.virt.libvirt.vif [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-612787496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-612787496',id=23,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:08:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-14ccsl2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:08:02Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=3712da29-1024-46ae-b142-57fa5083baa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "address": "fa:16:3e:4e:d4:db", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa56ca6e7-1a", "ovs_interfaceid": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.283 2 DEBUG nova.network.os_vif_util [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "address": "fa:16:3e:4e:d4:db", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa56ca6e7-1a", "ovs_interfaceid": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.285 2 DEBUG nova.network.os_vif_util [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:d4:db,bridge_name='br-int',has_traffic_filtering=True,id=a56ca6e7-1a3f-4108-88da-cf00466e652a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa56ca6e7-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.285 2 DEBUG os_vif [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:d4:db,bridge_name='br-int',has_traffic_filtering=True,id=a56ca6e7-1a3f-4108-88da-cf00466e652a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa56ca6e7-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.288 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.293 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa56ca6e7-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa56ca6e7-1a, col_values=(('external_ids', {'iface-id': 'a56ca6e7-1a3f-4108-88da-cf00466e652a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:d4:db', 'vm-uuid': '3712da29-1024-46ae-b142-57fa5083baa0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:43 np0005485008 NetworkManager[51587]: <info>  [1760371783.2986] manager: (tapa56ca6e7-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.307 2 INFO os_vif [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:d4:db,bridge_name='br-int',has_traffic_filtering=True,id=a56ca6e7-1a3f-4108-88da-cf00466e652a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa56ca6e7-1a')#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.308 2 DEBUG nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:09:43 np0005485008 nova_compute[192512]: 2025-10-13 16:09:43.308 2 DEBUG nova.compute.manager [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpssvq52n2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='3712da29-1024-46ae-b142-57fa5083baa0',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:09:43 np0005485008 podman[225198]: 2025-10-13 16:09:43.779996546 +0000 UTC m=+0.068853234 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 12:09:43 np0005485008 podman[225199]: 2025-10-13 16:09:43.786270556 +0000 UTC m=+0.072276053 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:09:43 np0005485008 podman[225196]: 2025-10-13 16:09:43.80022593 +0000 UTC m=+0.089871043 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:09:43 np0005485008 podman[225197]: 2025-10-13 16:09:43.823777821 +0000 UTC m=+0.115029585 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 13 12:09:43 np0005485008 podman[225200]: 2025-10-13 16:09:43.828826761 +0000 UTC m=+0.105447909 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 12:09:46 np0005485008 nova_compute[192512]: 2025-10-13 16:09:46.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:47 np0005485008 nova_compute[192512]: 2025-10-13 16:09:47.452 2 DEBUG nova.network.neutron [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Port a56ca6e7-1a3f-4108-88da-cf00466e652a updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:09:47 np0005485008 nova_compute[192512]: 2025-10-13 16:09:47.453 2 DEBUG nova.compute.manager [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpssvq52n2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='3712da29-1024-46ae-b142-57fa5083baa0',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:09:47 np0005485008 NetworkManager[51587]: <info>  [1760371787.6821] manager: (tapa56ca6e7-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 13 12:09:47 np0005485008 kernel: tapa56ca6e7-1a: entered promiscuous mode
Oct 13 12:09:47 np0005485008 nova_compute[192512]: 2025-10-13 16:09:47.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:47 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:47Z|00262|binding|INFO|Claiming lport a56ca6e7-1a3f-4108-88da-cf00466e652a for this additional chassis.
Oct 13 12:09:47 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:47Z|00263|binding|INFO|a56ca6e7-1a3f-4108-88da-cf00466e652a: Claiming fa:16:3e:4e:d4:db 10.100.0.10
Oct 13 12:09:47 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:47Z|00264|binding|INFO|Setting lport a56ca6e7-1a3f-4108-88da-cf00466e652a ovn-installed in OVS
Oct 13 12:09:47 np0005485008 nova_compute[192512]: 2025-10-13 16:09:47.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:47 np0005485008 systemd-udevd[225309]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:09:47 np0005485008 systemd-machined[152551]: New machine qemu-22-instance-00000017.
Oct 13 12:09:47 np0005485008 NetworkManager[51587]: <info>  [1760371787.7300] device (tapa56ca6e7-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:09:47 np0005485008 NetworkManager[51587]: <info>  [1760371787.7310] device (tapa56ca6e7-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:09:47 np0005485008 systemd[1]: Started Virtual Machine qemu-22-instance-00000017.
Oct 13 12:09:48 np0005485008 nova_compute[192512]: 2025-10-13 16:09:48.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:48 np0005485008 nova_compute[192512]: 2025-10-13 16:09:48.655 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371788.6546888, 3712da29-1024-46ae-b142-57fa5083baa0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:09:48 np0005485008 nova_compute[192512]: 2025-10-13 16:09:48.655 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] VM Started (Lifecycle Event)#033[00m
Oct 13 12:09:48 np0005485008 nova_compute[192512]: 2025-10-13 16:09:48.678 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:09:49 np0005485008 nova_compute[192512]: 2025-10-13 16:09:49.373 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371789.3733337, 3712da29-1024-46ae-b142-57fa5083baa0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:09:49 np0005485008 nova_compute[192512]: 2025-10-13 16:09:49.374 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:09:49 np0005485008 nova_compute[192512]: 2025-10-13 16:09:49.412 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:09:49 np0005485008 nova_compute[192512]: 2025-10-13 16:09:49.417 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:09:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:09:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:09:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:09:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:09:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:09:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:09:49 np0005485008 nova_compute[192512]: 2025-10-13 16:09:49.457 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:09:51 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:51Z|00265|binding|INFO|Claiming lport a56ca6e7-1a3f-4108-88da-cf00466e652a for this chassis.
Oct 13 12:09:51 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:51Z|00266|binding|INFO|a56ca6e7-1a3f-4108-88da-cf00466e652a: Claiming fa:16:3e:4e:d4:db 10.100.0.10
Oct 13 12:09:51 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:51Z|00267|binding|INFO|Setting lport a56ca6e7-1a3f-4108-88da-cf00466e652a up in Southbound
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.186 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:d4:db 10.100.0.10'], port_security=['fa:16:3e:4e:d4:db 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '3712da29-1024-46ae-b142-57fa5083baa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=a56ca6e7-1a3f-4108-88da-cf00466e652a) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.188 103642 INFO neutron.agent.ovn.metadata.agent [-] Port a56ca6e7-1a3f-4108-88da-cf00466e652a in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae bound to our chassis#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.189 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.211 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ac01e58a-b0bf-4731-9542-cb8c0ccb7fe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.264 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[061a6e59-8932-4df0-ae4e-dc269e0dd977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.268 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4650c4-feff-4a04-b93e-13b1fc853014]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.314 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[fd5830b9-8601-4720-a5ad-3de6fa8fa14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:51 np0005485008 nova_compute[192512]: 2025-10-13 16:09:51.319 2 INFO nova.compute.manager [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Post operation of migration started#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.334 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2f03f89d-3f56-4d44-9617-aaa043a064cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 1672, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531175, 'reachable_time': 33061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 225345, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.354 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd5b455-3f89-4732-b4b5-012cc78baa15]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531188, 'tstamp': 531188}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 225346, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531191, 'tstamp': 531191}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 225346, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.356 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:51 np0005485008 nova_compute[192512]: 2025-10-13 16:09:51.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:51 np0005485008 nova_compute[192512]: 2025-10-13 16:09:51.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.360 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.361 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.361 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:51.362 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:09:51 np0005485008 nova_compute[192512]: 2025-10-13 16:09:51.638 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-3712da29-1024-46ae-b142-57fa5083baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:09:51 np0005485008 nova_compute[192512]: 2025-10-13 16:09:51.639 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-3712da29-1024-46ae-b142-57fa5083baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:09:51 np0005485008 nova_compute[192512]: 2025-10-13 16:09:51.640 2 DEBUG nova.network.neutron [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:09:51 np0005485008 nova_compute[192512]: 2025-10-13 16:09:51.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:53 np0005485008 nova_compute[192512]: 2025-10-13 16:09:53.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:53 np0005485008 nova_compute[192512]: 2025-10-13 16:09:53.569 2 DEBUG nova.network.neutron [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Updating instance_info_cache with network_info: [{"id": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "address": "fa:16:3e:4e:d4:db", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa56ca6e7-1a", "ovs_interfaceid": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:09:53 np0005485008 nova_compute[192512]: 2025-10-13 16:09:53.600 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-3712da29-1024-46ae-b142-57fa5083baa0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:09:53 np0005485008 nova_compute[192512]: 2025-10-13 16:09:53.620 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:53 np0005485008 nova_compute[192512]: 2025-10-13 16:09:53.621 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:53 np0005485008 nova_compute[192512]: 2025-10-13 16:09:53.621 2 DEBUG oslo_concurrency.lockutils [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:53 np0005485008 nova_compute[192512]: 2025-10-13 16:09:53.627 2 INFO nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:09:53 np0005485008 virtqemud[192082]: Domain id=22 name='instance-00000017' uuid=3712da29-1024-46ae-b142-57fa5083baa0 is tainted: custom-monitor
Oct 13 12:09:54 np0005485008 nova_compute[192512]: 2025-10-13 16:09:54.636 2 INFO nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:09:55 np0005485008 nova_compute[192512]: 2025-10-13 16:09:55.644 2 INFO nova.virt.libvirt.driver [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:09:55 np0005485008 nova_compute[192512]: 2025-10-13 16:09:55.651 2 DEBUG nova.compute.manager [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:09:55 np0005485008 nova_compute[192512]: 2025-10-13 16:09:55.671 2 DEBUG nova.objects.instance [None req-59420a7e-35d4-45f3-93e9-91684f6df281 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:09:56 np0005485008 nova_compute[192512]: 2025-10-13 16:09:56.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:57 np0005485008 podman[225347]: 2025-10-13 16:09:57.768347933 +0000 UTC m=+0.066791538 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, config_id=edpm, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.863 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "f9754727-88a8-41a9-a7bb-63bb67701c46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.864 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.865 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.865 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.865 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.867 2 INFO nova.compute.manager [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Terminating instance#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.868 2 DEBUG nova.compute.manager [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:09:58 np0005485008 kernel: tap5fe592c8-7d (unregistering): left promiscuous mode
Oct 13 12:09:58 np0005485008 NetworkManager[51587]: <info>  [1760371798.8956] device (tap5fe592c8-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:58 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:58Z|00268|binding|INFO|Releasing lport 5fe592c8-7d19-4183-bf59-ce0005fa0c0d from this chassis (sb_readonly=0)
Oct 13 12:09:58 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:58Z|00269|binding|INFO|Setting lport 5fe592c8-7d19-4183-bf59-ce0005fa0c0d down in Southbound
Oct 13 12:09:58 np0005485008 ovn_controller[94758]: 2025-10-13T16:09:58Z|00270|binding|INFO|Removing iface tap5fe592c8-7d ovn-installed in OVS
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:58.911 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:c8:6f 10.100.0.6'], port_security=['fa:16:3e:e2:c8:6f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f9754727-88a8-41a9-a7bb-63bb67701c46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=5fe592c8-7d19-4183-bf59-ce0005fa0c0d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:09:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:58.912 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 5fe592c8-7d19-4183-bf59-ce0005fa0c0d in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:09:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:58.913 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae#033[00m
Oct 13 12:09:58 np0005485008 nova_compute[192512]: 2025-10-13 16:09:58.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:58.935 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[58823dad-3858-4595-a9ce-8c260457a02b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:58 np0005485008 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 13 12:09:58 np0005485008 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000018.scope: Consumed 2.847s CPU time.
Oct 13 12:09:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:58.965 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[e367d678-73d9-4dc1-b2ea-71504efb8fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:58 np0005485008 systemd-machined[152551]: Machine qemu-21-instance-00000018 terminated.
Oct 13 12:09:58 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:58.969 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[93214074-a82d-40af-8aa9-a929093ec179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.003 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[0718b17b-26c0-402c-9353-dd4dbabfef61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.054 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[23a53e4a-ae6f-4698-a968-150776a3dac6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39a43da9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:43:e9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 43, 'tx_packets': 7, 'rx_bytes': 2302, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 43, 'tx_packets': 7, 'rx_bytes': 2302, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531175, 'reachable_time': 33061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 225379, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.075 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5d2d68-2ffa-4559-9470-86d1260eec12]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531188, 'tstamp': 531188}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 225380, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap39a43da9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531191, 'tstamp': 531191}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 225380, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.077 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.086 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39a43da9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.087 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.087 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39a43da9-c0, col_values=(('external_ids', {'iface-id': '5d5d0d7a-9d7b-48ad-b095-bf9e3e12e182'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:59 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:09:59.087 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.146 2 INFO nova.virt.libvirt.driver [-] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Instance destroyed successfully.#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.147 2 DEBUG nova.objects.instance [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid f9754727-88a8-41a9-a7bb-63bb67701c46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.164 2 DEBUG nova.virt.libvirt.vif [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:08:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-1872308486',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-1872308486',id=24,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:08:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-2n98f1x3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:09:38Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=f9754727-88a8-41a9-a7bb-63bb67701c46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "address": "fa:16:3e:e2:c8:6f", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe592c8-7d", "ovs_interfaceid": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.164 2 DEBUG nova.network.os_vif_util [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "address": "fa:16:3e:e2:c8:6f", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fe592c8-7d", "ovs_interfaceid": "5fe592c8-7d19-4183-bf59-ce0005fa0c0d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.164 2 DEBUG nova.network.os_vif_util [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:c8:6f,bridge_name='br-int',has_traffic_filtering=True,id=5fe592c8-7d19-4183-bf59-ce0005fa0c0d,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe592c8-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.165 2 DEBUG os_vif [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:c8:6f,bridge_name='br-int',has_traffic_filtering=True,id=5fe592c8-7d19-4183-bf59-ce0005fa0c0d,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe592c8-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.167 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fe592c8-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.172 2 INFO os_vif [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:c8:6f,bridge_name='br-int',has_traffic_filtering=True,id=5fe592c8-7d19-4183-bf59-ce0005fa0c0d,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fe592c8-7d')#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.173 2 INFO nova.virt.libvirt.driver [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Deleting instance files /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46_del#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.174 2 INFO nova.virt.libvirt.driver [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Deletion of /var/lib/nova/instances/f9754727-88a8-41a9-a7bb-63bb67701c46_del complete#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.398 2 INFO nova.compute.manager [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Took 0.53 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.399 2 DEBUG oslo.service.loopingcall [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.399 2 DEBUG nova.compute.manager [-] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.400 2 DEBUG nova.network.neutron [-] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.618 2 DEBUG nova.compute.manager [req-f4d72bc5-387c-4700-b21c-5429edf4b636 req-0f303619-2d7f-4293-8dd8-4b85791fe007 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Received event network-vif-unplugged-5fe592c8-7d19-4183-bf59-ce0005fa0c0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.619 2 DEBUG oslo_concurrency.lockutils [req-f4d72bc5-387c-4700-b21c-5429edf4b636 req-0f303619-2d7f-4293-8dd8-4b85791fe007 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.619 2 DEBUG oslo_concurrency.lockutils [req-f4d72bc5-387c-4700-b21c-5429edf4b636 req-0f303619-2d7f-4293-8dd8-4b85791fe007 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.620 2 DEBUG oslo_concurrency.lockutils [req-f4d72bc5-387c-4700-b21c-5429edf4b636 req-0f303619-2d7f-4293-8dd8-4b85791fe007 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.620 2 DEBUG nova.compute.manager [req-f4d72bc5-387c-4700-b21c-5429edf4b636 req-0f303619-2d7f-4293-8dd8-4b85791fe007 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] No waiting events found dispatching network-vif-unplugged-5fe592c8-7d19-4183-bf59-ce0005fa0c0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.620 2 DEBUG nova.compute.manager [req-f4d72bc5-387c-4700-b21c-5429edf4b636 req-0f303619-2d7f-4293-8dd8-4b85791fe007 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Received event network-vif-unplugged-5fe592c8-7d19-4183-bf59-ce0005fa0c0d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.903 2 DEBUG nova.network.neutron [-] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.920 2 INFO nova.compute.manager [-] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Took 0.52 seconds to deallocate network for instance.#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.966 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.966 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.973 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:09:59 np0005485008 nova_compute[192512]: 2025-10-13 16:09:59.979 2 DEBUG nova.compute.manager [req-c9a8d43b-0488-4bf9-9be6-b5ee7596d72a req-08489469-cc6b-4dfa-85b3-1cec8fad584a 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Received event network-vif-deleted-5fe592c8-7d19-4183-bf59-ce0005fa0c0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.014 2 INFO nova.scheduler.client.report [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance f9754727-88a8-41a9-a7bb-63bb67701c46#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.062 2 DEBUG oslo_concurrency.lockutils [None req-92a4bdab-831d-4c9c-8691-4401e92f924c 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.865 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "3712da29-1024-46ae-b142-57fa5083baa0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.865 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.866 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "3712da29-1024-46ae-b142-57fa5083baa0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.866 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.866 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.867 2 INFO nova.compute.manager [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Terminating instance#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.868 2 DEBUG nova.compute.manager [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:10:00 np0005485008 kernel: tapa56ca6e7-1a (unregistering): left promiscuous mode
Oct 13 12:10:00 np0005485008 NetworkManager[51587]: <info>  [1760371800.9015] device (tapa56ca6e7-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:00 np0005485008 ovn_controller[94758]: 2025-10-13T16:10:00Z|00271|binding|INFO|Releasing lport a56ca6e7-1a3f-4108-88da-cf00466e652a from this chassis (sb_readonly=0)
Oct 13 12:10:00 np0005485008 ovn_controller[94758]: 2025-10-13T16:10:00Z|00272|binding|INFO|Setting lport a56ca6e7-1a3f-4108-88da-cf00466e652a down in Southbound
Oct 13 12:10:00 np0005485008 ovn_controller[94758]: 2025-10-13T16:10:00Z|00273|binding|INFO|Removing iface tapa56ca6e7-1a ovn-installed in OVS
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:00.917 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:d4:db 10.100.0.10'], port_security=['fa:16:3e:4e:d4:db 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '3712da29-1024-46ae-b142-57fa5083baa0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d9418fd42c841d38cbfc7819a3fca65', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'f960747e-7ab9-48d2-aed6-cfc85edba0a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a38eb17-ee7c-46ea-87c4-7533606e2408, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=a56ca6e7-1a3f-4108-88da-cf00466e652a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:10:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:00.918 103642 INFO neutron.agent.ovn.metadata.agent [-] Port a56ca6e7-1a3f-4108-88da-cf00466e652a in datapath 39a43da9-cf4c-4fe3-ab73-bf8705320dae unbound from our chassis#033[00m
Oct 13 12:10:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:00.919 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39a43da9-cf4c-4fe3-ab73-bf8705320dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:10:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:00.920 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[feb67442-8a6a-40d4-ac64-9ff5ced3e711]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:00 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:00.921 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae namespace which is not needed anymore#033[00m
Oct 13 12:10:00 np0005485008 nova_compute[192512]: 2025-10-13 16:10:00.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:00 np0005485008 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct 13 12:10:00 np0005485008 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000017.scope: Consumed 1.881s CPU time.
Oct 13 12:10:00 np0005485008 systemd-machined[152551]: Machine qemu-22-instance-00000017 terminated.
Oct 13 12:10:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [NOTICE]   (225162) : haproxy version is 2.8.14-c23fe91
Oct 13 12:10:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [NOTICE]   (225162) : path to executable is /usr/sbin/haproxy
Oct 13 12:10:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [WARNING]  (225162) : Exiting Master process...
Oct 13 12:10:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [WARNING]  (225162) : Exiting Master process...
Oct 13 12:10:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [ALERT]    (225162) : Current worker (225164) exited with code 143 (Terminated)
Oct 13 12:10:01 np0005485008 neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae[225158]: [WARNING]  (225162) : All workers exited. Exiting... (0)
Oct 13 12:10:01 np0005485008 systemd[1]: libpod-b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa.scope: Deactivated successfully.
Oct 13 12:10:01 np0005485008 NetworkManager[51587]: <info>  [1760371801.0909] manager: (tapa56ca6e7-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct 13 12:10:01 np0005485008 podman[225420]: 2025-10-13 16:10:01.092869148 +0000 UTC m=+0.062102939 container died b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 12:10:01 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa-userdata-shm.mount: Deactivated successfully.
Oct 13 12:10:01 np0005485008 systemd[1]: var-lib-containers-storage-overlay-e38a32304c81dfd82e16d6a816bb5e93e61a7063939cb077a20949c9e0d07ad9-merged.mount: Deactivated successfully.
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.148 2 INFO nova.virt.libvirt.driver [-] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Instance destroyed successfully.#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.150 2 DEBUG nova.objects.instance [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lazy-loading 'resources' on Instance uuid 3712da29-1024-46ae-b142-57fa5083baa0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.168 2 DEBUG nova.virt.libvirt.vif [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:07:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteStrategies-server-612787496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutestrategies-server-612787496',id=23,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:08:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d9418fd42c841d38cbfc7819a3fca65',ramdisk_id='',reservation_id='r-14ccsl2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='
0',owner_project_name='tempest-TestExecuteStrategies-1416319229',owner_user_name='tempest-TestExecuteStrategies-1416319229-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:09:55Z,user_data=None,user_id='3f85e781b03b405795a2079908bd2792',uuid=3712da29-1024-46ae-b142-57fa5083baa0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "address": "fa:16:3e:4e:d4:db", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa56ca6e7-1a", "ovs_interfaceid": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.169 2 DEBUG nova.network.os_vif_util [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converting VIF {"id": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "address": "fa:16:3e:4e:d4:db", "network": {"id": "39a43da9-cf4c-4fe3-ab73-bf8705320dae", "bridge": "br-int", "label": "tempest-TestExecuteStrategies-1956893083-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd02120fd4e647eeb8b74a0b2744e7dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa56ca6e7-1a", "ovs_interfaceid": "a56ca6e7-1a3f-4108-88da-cf00466e652a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.169 2 DEBUG nova.network.os_vif_util [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4e:d4:db,bridge_name='br-int',has_traffic_filtering=True,id=a56ca6e7-1a3f-4108-88da-cf00466e652a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa56ca6e7-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.171 2 DEBUG os_vif [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:d4:db,bridge_name='br-int',has_traffic_filtering=True,id=a56ca6e7-1a3f-4108-88da-cf00466e652a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa56ca6e7-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa56ca6e7-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.178 2 INFO os_vif [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4e:d4:db,bridge_name='br-int',has_traffic_filtering=True,id=a56ca6e7-1a3f-4108-88da-cf00466e652a,network=Network(39a43da9-cf4c-4fe3-ab73-bf8705320dae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa56ca6e7-1a')#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.179 2 INFO nova.virt.libvirt.driver [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Deleting instance files /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0_del#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.179 2 INFO nova.virt.libvirt.driver [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Deletion of /var/lib/nova/instances/3712da29-1024-46ae-b142-57fa5083baa0_del complete#033[00m
Oct 13 12:10:01 np0005485008 podman[225420]: 2025-10-13 16:10:01.180362415 +0000 UTC m=+0.149596216 container cleanup b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:10:01 np0005485008 systemd[1]: libpod-conmon-b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa.scope: Deactivated successfully.
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.230 2 INFO nova.compute.manager [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.231 2 DEBUG oslo.service.loopingcall [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.231 2 DEBUG nova.compute.manager [-] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.232 2 DEBUG nova.network.neutron [-] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:10:01 np0005485008 podman[225468]: 2025-10-13 16:10:01.335456306 +0000 UTC m=+0.125597543 container remove b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.342 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[44a14c31-50f9-43d6-93ed-b9ec19e94864]: (4, ('Mon Oct 13 04:10:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa)\nb0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa\nMon Oct 13 04:10:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae (b0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa)\nb0df7c8606391c0baacde6e05fd489a61440058c3deb390416f5eb1b4e203cfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.344 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[597d853f-7043-477f-9e06-e0a62123f4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.345 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39a43da9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:10:01 np0005485008 kernel: tap39a43da9-c0: left promiscuous mode
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.354 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[8d96a154-0d21-4204-a51f-1071745d29e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.386 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1852cf03-a1b7-49aa-8ad8-4d94552eb3e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.388 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[301bb0a1-e5a7-4b41-9e85-9bb8e4468e7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.412 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[32b7f9a5-901a-42bd-a6af-e82e3312f535]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531166, 'reachable_time': 21495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 225480, 'error': None, 'target': 'ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:01 np0005485008 systemd[1]: run-netns-ovnmeta\x2d39a43da9\x2dcf4c\x2d4fe3\x2dab73\x2dbf8705320dae.mount: Deactivated successfully.
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.417 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39a43da9-cf4c-4fe3-ab73-bf8705320dae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:10:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:01.417 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[5641d7d5-ea10-4883-9c82-284add22ca95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.718 2 DEBUG nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Received event network-vif-plugged-5fe592c8-7d19-4183-bf59-ce0005fa0c0d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.719 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.719 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.720 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "f9754727-88a8-41a9-a7bb-63bb67701c46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.720 2 DEBUG nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] No waiting events found dispatching network-vif-plugged-5fe592c8-7d19-4183-bf59-ce0005fa0c0d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.721 2 WARNING nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Received unexpected event network-vif-plugged-5fe592c8-7d19-4183-bf59-ce0005fa0c0d for instance with vm_state deleted and task_state None.#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.721 2 DEBUG nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Received event network-vif-unplugged-a56ca6e7-1a3f-4108-88da-cf00466e652a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.722 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3712da29-1024-46ae-b142-57fa5083baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.722 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.723 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.724 2 DEBUG nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] No waiting events found dispatching network-vif-unplugged-a56ca6e7-1a3f-4108-88da-cf00466e652a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.724 2 DEBUG nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Received event network-vif-unplugged-a56ca6e7-1a3f-4108-88da-cf00466e652a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.725 2 DEBUG nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Received event network-vif-plugged-a56ca6e7-1a3f-4108-88da-cf00466e652a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.725 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "3712da29-1024-46ae-b142-57fa5083baa0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.726 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.726 2 DEBUG oslo_concurrency.lockutils [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.726 2 DEBUG nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] No waiting events found dispatching network-vif-plugged-a56ca6e7-1a3f-4108-88da-cf00466e652a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.727 2 WARNING nova.compute.manager [req-8ccb03fd-8bb0-4c3c-a549-de6c5863e8b4 req-f0284728-3725-4f60-834e-7a410c3295ea 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Received unexpected event network-vif-plugged-a56ca6e7-1a3f-4108-88da-cf00466e652a for instance with vm_state active and task_state deleting.#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.764 2 DEBUG nova.network.neutron [-] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.786 2 INFO nova.compute.manager [-] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Took 0.55 seconds to deallocate network for instance.#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.836 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.837 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.844 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.872 2 INFO nova.scheduler.client.report [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Deleted allocations for instance 3712da29-1024-46ae-b142-57fa5083baa0#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:01 np0005485008 nova_compute[192512]: 2025-10-13 16:10:01.933 2 DEBUG oslo_concurrency.lockutils [None req-339fb6a1-54da-4cfa-b434-21f52f7b25d0 3f85e781b03b405795a2079908bd2792 4d9418fd42c841d38cbfc7819a3fca65 - - default default] Lock "3712da29-1024-46ae-b142-57fa5083baa0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:02 np0005485008 nova_compute[192512]: 2025-10-13 16:10:02.053 2 DEBUG nova.compute.manager [req-c5dcbd23-5322-4ee6-a896-c8426b60f4e5 req-2ab9e2a2-6e06-4cd8-a980-c3f3ae3f6537 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Received event network-vif-deleted-a56ca6e7-1a3f-4108-88da-cf00466e652a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:10:05 np0005485008 podman[202884]: time="2025-10-13T16:10:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:10:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:10:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:10:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:10:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 13 12:10:06 np0005485008 nova_compute[192512]: 2025-10-13 16:10:06.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:06 np0005485008 nova_compute[192512]: 2025-10-13 16:10:06.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:11 np0005485008 nova_compute[192512]: 2025-10-13 16:10:11.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:11 np0005485008 nova_compute[192512]: 2025-10-13 16:10:11.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:14 np0005485008 nova_compute[192512]: 2025-10-13 16:10:14.145 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371799.1440494, f9754727-88a8-41a9-a7bb-63bb67701c46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:10:14 np0005485008 nova_compute[192512]: 2025-10-13 16:10:14.146 2 INFO nova.compute.manager [-] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:10:14 np0005485008 nova_compute[192512]: 2025-10-13 16:10:14.166 2 DEBUG nova.compute.manager [None req-4b66dbaf-ab80-49bb-b12e-0124fdaec5ad - - - - - -] [instance: f9754727-88a8-41a9-a7bb-63bb67701c46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:10:14 np0005485008 podman[225489]: 2025-10-13 16:10:14.79460405 +0000 UTC m=+0.065331303 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 12:10:14 np0005485008 podman[225481]: 2025-10-13 16:10:14.79462893 +0000 UTC m=+0.083326134 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251009, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 12:10:14 np0005485008 podman[225482]: 2025-10-13 16:10:14.798061 +0000 UTC m=+0.082026434 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 13 12:10:14 np0005485008 podman[225483]: 2025-10-13 16:10:14.799427123 +0000 UTC m=+0.078139230 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 12:10:14 np0005485008 podman[225495]: 2025-10-13 16:10:14.845079507 +0000 UTC m=+0.106927237 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:10:16 np0005485008 nova_compute[192512]: 2025-10-13 16:10:16.145 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760371801.1443248, 3712da29-1024-46ae-b142-57fa5083baa0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:10:16 np0005485008 nova_compute[192512]: 2025-10-13 16:10:16.146 2 INFO nova.compute.manager [-] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:10:16 np0005485008 nova_compute[192512]: 2025-10-13 16:10:16.163 2 DEBUG nova.compute.manager [None req-c203821d-8ade-40ff-a6d9-d08a9941d900 - - - - - -] [instance: 3712da29-1024-46ae-b142-57fa5083baa0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:10:16 np0005485008 nova_compute[192512]: 2025-10-13 16:10:16.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:16 np0005485008 nova_compute[192512]: 2025-10-13 16:10:16.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:10:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:10:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:10:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:10:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:10:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:10:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:10:21 np0005485008 nova_compute[192512]: 2025-10-13 16:10:21.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:21 np0005485008 nova_compute[192512]: 2025-10-13 16:10:21.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:21 np0005485008 nova_compute[192512]: 2025-10-13 16:10:21.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:10:21 np0005485008 nova_compute[192512]: 2025-10-13 16:10:21.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:24 np0005485008 nova_compute[192512]: 2025-10-13 16:10:24.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:26 np0005485008 nova_compute[192512]: 2025-10-13 16:10:26.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:26 np0005485008 nova_compute[192512]: 2025-10-13 16:10:26.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:26 np0005485008 nova_compute[192512]: 2025-10-13 16:10:26.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:26 np0005485008 nova_compute[192512]: 2025-10-13 16:10:26.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:27 np0005485008 nova_compute[192512]: 2025-10-13 16:10:27.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:28 np0005485008 podman[225585]: 2025-10-13 16:10:28.768566848 +0000 UTC m=+0.068555344 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Oct 13 12:10:29 np0005485008 nova_compute[192512]: 2025-10-13 16:10:29.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:29 np0005485008 nova_compute[192512]: 2025-10-13 16:10:29.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:10:29 np0005485008 nova_compute[192512]: 2025-10-13 16:10:29.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:10:29 np0005485008 nova_compute[192512]: 2025-10-13 16:10:29.447 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.456 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.456 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.456 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.456 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.628 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.630 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5854MB free_disk=73.46339416503906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.630 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.630 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.691 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.691 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.713 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:10:31 np0005485008 ovn_controller[94758]: 2025-10-13T16:10:31Z|00274|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.730 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.750 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.751 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:31 np0005485008 nova_compute[192512]: 2025-10-13 16:10:31.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:32 np0005485008 nova_compute[192512]: 2025-10-13 16:10:32.752 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:33.976 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:10:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:33.977 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:10:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:33.977 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:10:35 np0005485008 podman[202884]: time="2025-10-13T16:10:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:10:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:10:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:10:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:10:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 13 12:10:36 np0005485008 nova_compute[192512]: 2025-10-13 16:10:36.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:36 np0005485008 nova_compute[192512]: 2025-10-13 16:10:36.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:36 np0005485008 nova_compute[192512]: 2025-10-13 16:10:36.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:41 np0005485008 nova_compute[192512]: 2025-10-13 16:10:41.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:42 np0005485008 nova_compute[192512]: 2025-10-13 16:10:42.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:42 np0005485008 nova_compute[192512]: 2025-10-13 16:10:42.443 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:42 np0005485008 nova_compute[192512]: 2025-10-13 16:10:42.443 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 12:10:42 np0005485008 nova_compute[192512]: 2025-10-13 16:10:42.465 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 12:10:45 np0005485008 podman[225609]: 2025-10-13 16:10:45.768033741 +0000 UTC m=+0.058653738 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true)
Oct 13 12:10:45 np0005485008 podman[225608]: 2025-10-13 16:10:45.814955497 +0000 UTC m=+0.099786120 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd)
Oct 13 12:10:45 np0005485008 podman[225610]: 2025-10-13 16:10:45.817865449 +0000 UTC m=+0.103024292 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 12:10:45 np0005485008 podman[225611]: 2025-10-13 16:10:45.835646085 +0000 UTC m=+0.108469695 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 12:10:45 np0005485008 podman[225612]: 2025-10-13 16:10:45.854361631 +0000 UTC m=+0.124173905 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:10:46 np0005485008 nova_compute[192512]: 2025-10-13 16:10:46.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:47 np0005485008 nova_compute[192512]: 2025-10-13 16:10:47.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:48 np0005485008 nova_compute[192512]: 2025-10-13 16:10:48.442 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:10:48 np0005485008 nova_compute[192512]: 2025-10-13 16:10:48.442 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 12:10:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:10:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:10:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:10:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:10:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:10:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:10:51 np0005485008 nova_compute[192512]: 2025-10-13 16:10:51.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:51.902 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:10:51 np0005485008 nova_compute[192512]: 2025-10-13 16:10:51.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:51.904 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:10:51 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:10:51.904 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:10:52 np0005485008 nova_compute[192512]: 2025-10-13 16:10:52.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:53 np0005485008 nova_compute[192512]: 2025-10-13 16:10:53.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:56 np0005485008 nova_compute[192512]: 2025-10-13 16:10:56.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:57 np0005485008 nova_compute[192512]: 2025-10-13 16:10:57.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:10:59 np0005485008 podman[225711]: 2025-10-13 16:10:59.778643168 +0000 UTC m=+0.074035939 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 12:11:01 np0005485008 nova_compute[192512]: 2025-10-13 16:11:01.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:02 np0005485008 nova_compute[192512]: 2025-10-13 16:11:02.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:05 np0005485008 podman[202884]: time="2025-10-13T16:11:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:11:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:11:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:11:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:11:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3008 "" "Go-http-client/1.1"
Oct 13 12:11:06 np0005485008 nova_compute[192512]: 2025-10-13 16:11:06.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:07 np0005485008 nova_compute[192512]: 2025-10-13 16:11:07.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:11 np0005485008 nova_compute[192512]: 2025-10-13 16:11:11.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:12 np0005485008 nova_compute[192512]: 2025-10-13 16:11:12.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:16 np0005485008 nova_compute[192512]: 2025-10-13 16:11:16.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:16 np0005485008 podman[225737]: 2025-10-13 16:11:16.785002558 +0000 UTC m=+0.070972712 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:11:16 np0005485008 podman[225734]: 2025-10-13 16:11:16.792876409 +0000 UTC m=+0.092208098 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 13 12:11:16 np0005485008 podman[225735]: 2025-10-13 16:11:16.798387354 +0000 UTC m=+0.091141253 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 13 12:11:16 np0005485008 podman[225736]: 2025-10-13 16:11:16.805961676 +0000 UTC m=+0.094915784 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:11:16 np0005485008 podman[225743]: 2025-10-13 16:11:16.821677726 +0000 UTC m=+0.103093004 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 13 12:11:17 np0005485008 nova_compute[192512]: 2025-10-13 16:11:17.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:11:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:11:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:11:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:11:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:11:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:11:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:11:21 np0005485008 nova_compute[192512]: 2025-10-13 16:11:21.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:22 np0005485008 nova_compute[192512]: 2025-10-13 16:11:22.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:23 np0005485008 nova_compute[192512]: 2025-10-13 16:11:23.443 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:23 np0005485008 nova_compute[192512]: 2025-10-13 16:11:23.443 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:11:24 np0005485008 nova_compute[192512]: 2025-10-13 16:11:24.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:26 np0005485008 nova_compute[192512]: 2025-10-13 16:11:26.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:26 np0005485008 nova_compute[192512]: 2025-10-13 16:11:26.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:27 np0005485008 nova_compute[192512]: 2025-10-13 16:11:27.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:27 np0005485008 nova_compute[192512]: 2025-10-13 16:11:27.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:28 np0005485008 nova_compute[192512]: 2025-10-13 16:11:28.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:30 np0005485008 nova_compute[192512]: 2025-10-13 16:11:30.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:30 np0005485008 nova_compute[192512]: 2025-10-13 16:11:30.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:11:30 np0005485008 nova_compute[192512]: 2025-10-13 16:11:30.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:11:30 np0005485008 nova_compute[192512]: 2025-10-13 16:11:30.445 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:11:30 np0005485008 podman[225833]: 2025-10-13 16:11:30.763867893 +0000 UTC m=+0.066352104 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 12:11:31 np0005485008 nova_compute[192512]: 2025-10-13 16:11:31.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:32 np0005485008 nova_compute[192512]: 2025-10-13 16:11:32.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:32 np0005485008 nova_compute[192512]: 2025-10-13 16:11:32.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:32 np0005485008 ovn_controller[94758]: 2025-10-13T16:11:32Z|00275|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.452 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.453 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.453 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.454 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.638 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.639 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5861MB free_disk=73.46341323852539GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.640 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.640 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.739 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.740 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.803 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.816 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.818 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:11:33 np0005485008 nova_compute[192512]: 2025-10-13 16:11:33.819 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:33.977 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:33.978 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:33.978 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:35 np0005485008 podman[202884]: time="2025-10-13T16:11:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:11:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:11:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:11:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:11:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3011 "" "Go-http-client/1.1"
Oct 13 12:11:36 np0005485008 nova_compute[192512]: 2025-10-13 16:11:36.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:37 np0005485008 nova_compute[192512]: 2025-10-13 16:11:37.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:39 np0005485008 nova_compute[192512]: 2025-10-13 16:11:39.815 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:11:41 np0005485008 nova_compute[192512]: 2025-10-13 16:11:41.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:42 np0005485008 nova_compute[192512]: 2025-10-13 16:11:42.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:46 np0005485008 nova_compute[192512]: 2025-10-13 16:11:46.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:47 np0005485008 nova_compute[192512]: 2025-10-13 16:11:47.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:47 np0005485008 podman[225856]: 2025-10-13 16:11:47.779581903 +0000 UTC m=+0.068974326 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:11:47 np0005485008 podman[225857]: 2025-10-13 16:11:47.782654881 +0000 UTC m=+0.075109152 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 12:11:47 np0005485008 podman[225858]: 2025-10-13 16:11:47.787309779 +0000 UTC m=+0.076425555 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 13 12:11:47 np0005485008 podman[225859]: 2025-10-13 16:11:47.788502707 +0000 UTC m=+0.067897753 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:11:47 np0005485008 podman[225860]: 2025-10-13 16:11:47.820715062 +0000 UTC m=+0.102162933 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.567 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.568 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.596 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.711 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.712 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.721 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.722 2 INFO nova.compute.claims [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.838 2 DEBUG nova.compute.provider_tree [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.858 2 DEBUG nova.scheduler.client.report [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.881 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.882 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.934 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.935 2 DEBUG nova.network.neutron [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.958 2 INFO nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 12:11:48 np0005485008 nova_compute[192512]: 2025-10-13 16:11:48.977 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.277 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.280 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.281 2 INFO nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Creating image(s)#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.282 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquiring lock "/var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.286 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "/var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.288 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "/var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.314 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.396 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.397 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.398 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.417 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:11:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:11:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:11:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:11:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:11:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:11:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.483 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.485 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.531 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.532 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.532 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.593 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.594 2 DEBUG nova.virt.disk.api [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Checking if we can resize image /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.594 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.652 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.653 2 DEBUG nova.virt.disk.api [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Cannot resize image /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.653 2 DEBUG nova.objects.instance [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lazy-loading 'migration_context' on Instance uuid 2af0747d-d588-49b0-acfd-16748a1e4153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.679 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.679 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Ensure instance console log exists: /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.680 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.680 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.680 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:49 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 12:11:49 np0005485008 nova_compute[192512]: 2025-10-13 16:11:49.882 2 DEBUG nova.network.neutron [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Successfully created port: 5cb75e75-7188-4375-a206-eecc6c7b7ba1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.778 2 DEBUG nova.network.neutron [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Successfully updated port: 5cb75e75-7188-4375-a206-eecc6c7b7ba1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.795 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquiring lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.795 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquired lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.796 2 DEBUG nova.network.neutron [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.886 2 DEBUG nova.compute.manager [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-changed-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.887 2 DEBUG nova.compute.manager [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Refreshing instance network info cache due to event network-changed-5cb75e75-7188-4375-a206-eecc6c7b7ba1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.887 2 DEBUG oslo_concurrency.lockutils [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:11:50 np0005485008 nova_compute[192512]: 2025-10-13 16:11:50.960 2 DEBUG nova.network.neutron [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.796 2 DEBUG nova.network.neutron [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updating instance_info_cache with network_info: [{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.826 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Releasing lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.827 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Instance network_info: |[{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.828 2 DEBUG oslo_concurrency.lockutils [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.829 2 DEBUG nova.network.neutron [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Refreshing network info cache for port 5cb75e75-7188-4375-a206-eecc6c7b7ba1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.834 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Start _get_guest_xml network_info=[{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.842 2 WARNING nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.850 2 DEBUG nova.virt.libvirt.host [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.852 2 DEBUG nova.virt.libvirt.host [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.862 2 DEBUG nova.virt.libvirt.host [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.870 2 DEBUG nova.virt.libvirt.host [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.877 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.878 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.880 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.881 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.881 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.882 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.882 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.883 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.883 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.884 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.885 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.885 2 DEBUG nova.virt.hardware [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.893 2 DEBUG nova.virt.libvirt.vif [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:11:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-611168433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-611168433',id=26,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83c033c913624d5dba20fa5669b0e083',ramdisk_id='',reservation_id='r-cuq71qen',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:11:49Z,user_data=None,user_id='522b0644eef543dc8e89708621200039',uuid=2af0747d-d588-49b0-acfd-16748a1e4153,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.894 2 DEBUG nova.network.os_vif_util [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Converting VIF {"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.895 2 DEBUG nova.network.os_vif_util [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.896 2 DEBUG nova.objects.instance [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2af0747d-d588-49b0-acfd-16748a1e4153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.929 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] End _get_guest_xml xml=<domain type="kvm">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <uuid>2af0747d-d588-49b0-acfd-16748a1e4153</uuid>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <name>instance-0000001a</name>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteVmWorkloadBalanceStrategy-server-611168433</nova:name>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 16:11:51</nova:creationTime>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:user uuid="522b0644eef543dc8e89708621200039">tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278-project-admin</nova:user>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:project uuid="83c033c913624d5dba20fa5669b0e083">tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278</nova:project>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        <nova:port uuid="5cb75e75-7188-4375-a206-eecc6c7b7ba1">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <system>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <entry name="serial">2af0747d-d588-49b0-acfd-16748a1e4153</entry>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <entry name="uuid">2af0747d-d588-49b0-acfd-16748a1e4153</entry>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </system>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <os>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  </os>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <features>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  </features>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  </clock>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  <devices>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk.config"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:9c:46:33"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <target dev="tap5cb75e75-71"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </interface>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/console.log" append="off"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </serial>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <video>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </video>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </rng>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 12:11:51 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 12:11:51 np0005485008 nova_compute[192512]:  </devices>
Oct 13 12:11:51 np0005485008 nova_compute[192512]: </domain>
Oct 13 12:11:51 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.931 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Preparing to wait for external event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.931 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.932 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.932 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.933 2 DEBUG nova.virt.libvirt.vif [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:11:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-611168433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-611168433',id=26,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83c033c913624d5dba20fa5669b0e083',ramdisk_id='',reservation_id='r-cuq71qen',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:11:49Z,user_data=None,user_id='522b0644eef543dc8e89708621200039',uuid=2af0747d-d588-49b0-acfd-16748a1e4153,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.933 2 DEBUG nova.network.os_vif_util [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Converting VIF {"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.934 2 DEBUG nova.network.os_vif_util [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.935 2 DEBUG os_vif [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cb75e75-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5cb75e75-71, col_values=(('external_ids', {'iface-id': '5cb75e75-7188-4375-a206-eecc6c7b7ba1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:46:33', 'vm-uuid': '2af0747d-d588-49b0-acfd-16748a1e4153'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:51 np0005485008 NetworkManager[51587]: <info>  [1760371911.9456] manager: (tap5cb75e75-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:51 np0005485008 nova_compute[192512]: 2025-10-13 16:11:51.954 2 INFO os_vif [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71')#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.030 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.030 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.031 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] No VIF found with MAC fa:16:3e:9c:46:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.031 2 INFO nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Using config drive#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.499 2 INFO nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Creating config drive at /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk.config#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.510 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_y5j3h2q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.647 2 DEBUG oslo_concurrency.processutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_y5j3h2q" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:11:52 np0005485008 kernel: tap5cb75e75-71: entered promiscuous mode
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:52 np0005485008 ovn_controller[94758]: 2025-10-13T16:11:52Z|00276|binding|INFO|Claiming lport 5cb75e75-7188-4375-a206-eecc6c7b7ba1 for this chassis.
Oct 13 12:11:52 np0005485008 ovn_controller[94758]: 2025-10-13T16:11:52Z|00277|binding|INFO|5cb75e75-7188-4375-a206-eecc6c7b7ba1: Claiming fa:16:3e:9c:46:33 10.100.0.8
Oct 13 12:11:52 np0005485008 NetworkManager[51587]: <info>  [1760371912.7226] manager: (tap5cb75e75-71): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.731 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:46:33 10.100.0.8'], port_security=['fa:16:3e:9c:46:33 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2af0747d-d588-49b0-acfd-16748a1e4153', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83c033c913624d5dba20fa5669b0e083', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fef33bcc-d87a-449f-ae48-5109c2f8219a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=50195e55-7f0b-4829-a862-f2b636709cfc, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=5cb75e75-7188-4375-a206-eecc6c7b7ba1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.733 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 5cb75e75-7188-4375-a206-eecc6c7b7ba1 in datapath 2da7d8dc-53a9-4df2-a3fc-227bd054dd7d bound to our chassis#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.734 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2da7d8dc-53a9-4df2-a3fc-227bd054dd7d#033[00m
Oct 13 12:11:52 np0005485008 systemd-udevd[225996]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.750 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c39b751f-f912-4d67-a087-af5765150177]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.753 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2da7d8dc-51 in ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.755 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2da7d8dc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.755 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1857cc3c-7441-45e6-9bb4-a4e91ead4841]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.756 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[99bc352a-7893-4112-bba8-f856e7a34fe5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 NetworkManager[51587]: <info>  [1760371912.7643] device (tap5cb75e75-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:11:52 np0005485008 NetworkManager[51587]: <info>  [1760371912.7660] device (tap5cb75e75-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:11:52 np0005485008 systemd-machined[152551]: New machine qemu-23-instance-0000001a.
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.778 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[f3de371e-b386-4b09-970f-8d244889c010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:52 np0005485008 systemd[1]: Started Virtual Machine qemu-23-instance-0000001a.
Oct 13 12:11:52 np0005485008 ovn_controller[94758]: 2025-10-13T16:11:52Z|00278|binding|INFO|Setting lport 5cb75e75-7188-4375-a206-eecc6c7b7ba1 ovn-installed in OVS
Oct 13 12:11:52 np0005485008 ovn_controller[94758]: 2025-10-13T16:11:52Z|00279|binding|INFO|Setting lport 5cb75e75-7188-4375-a206-eecc6c7b7ba1 up in Southbound
Oct 13 12:11:52 np0005485008 nova_compute[192512]: 2025-10-13 16:11:52.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.805 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d6145622-bb8c-42a0-9f2e-850add93caab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.844 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d018c345-588c-4441-a17c-4e8eab923bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.851 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa95089-e54d-4a6c-97c0-6c72b54d39c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 NetworkManager[51587]: <info>  [1760371912.8531] manager: (tap2da7d8dc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct 13 12:11:52 np0005485008 systemd-udevd[226000]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.887 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[74bcec3f-8c84-4edd-9546-b6cf4979e836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.890 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e0ac37-a938-46eb-a0b1-efbd614660e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 NetworkManager[51587]: <info>  [1760371912.9215] device (tap2da7d8dc-50): carrier: link connected
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.928 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf6c1b9-881b-4498-a6df-7710edba2d8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.953 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[66ab74a3-1453-4843-b435-d158bc5d5a39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2da7d8dc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:50:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545058, 'reachable_time': 44941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 226029, 'error': None, 'target': 'ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.971 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[af88d0cd-77f1-4720-8601-a61513ff72a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:50b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545058, 'tstamp': 545058}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 226030, 'error': None, 'target': 'ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:52 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:52.992 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5574e7-9d7f-4ada-8229-99c668da93aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2da7d8dc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:50:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545058, 'reachable_time': 44941, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 226031, 'error': None, 'target': 'ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.009 2 DEBUG nova.compute.manager [req-6b04b075-8af2-4560-849c-cc0f714b8033 req-9bdf4fd9-7724-426a-8523-3bb6bca486d9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.010 2 DEBUG oslo_concurrency.lockutils [req-6b04b075-8af2-4560-849c-cc0f714b8033 req-9bdf4fd9-7724-426a-8523-3bb6bca486d9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.011 2 DEBUG oslo_concurrency.lockutils [req-6b04b075-8af2-4560-849c-cc0f714b8033 req-9bdf4fd9-7724-426a-8523-3bb6bca486d9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.011 2 DEBUG oslo_concurrency.lockutils [req-6b04b075-8af2-4560-849c-cc0f714b8033 req-9bdf4fd9-7724-426a-8523-3bb6bca486d9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.011 2 DEBUG nova.compute.manager [req-6b04b075-8af2-4560-849c-cc0f714b8033 req-9bdf4fd9-7724-426a-8523-3bb6bca486d9 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Processing event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.040 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ecbfca-8b4a-47bd-a2eb-c047bd4c4ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.125 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7846c9-72eb-49ff-8e90-e078ec3cbeb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.126 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2da7d8dc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.127 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.127 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2da7d8dc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:11:53 np0005485008 NetworkManager[51587]: <info>  [1760371913.1306] manager: (tap2da7d8dc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:53 np0005485008 kernel: tap2da7d8dc-50: entered promiscuous mode
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.139 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2da7d8dc-50, col_values=(('external_ids', {'iface-id': '4c7ac051-1dc8-4e0e-baf3-2eb2144fc17a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:11:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:11:53Z|00280|binding|INFO|Releasing lport 4c7ac051-1dc8-4e0e-baf3-2eb2144fc17a from this chassis (sb_readonly=0)
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.153 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2da7d8dc-53a9-4df2-a3fc-227bd054dd7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2da7d8dc-53a9-4df2-a3fc-227bd054dd7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.154 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[de4caabd-7fbb-48ba-8ff3-0af5dd2aa055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.155 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/2da7d8dc-53a9-4df2-a3fc-227bd054dd7d.pid.haproxy
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 2da7d8dc-53a9-4df2-a3fc-227bd054dd7d
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.156 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'env', 'PROCESS_TAG=haproxy-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2da7d8dc-53a9-4df2-a3fc-227bd054dd7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.223 2 DEBUG nova.network.neutron [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updated VIF entry in instance network info cache for port 5cb75e75-7188-4375-a206-eecc6c7b7ba1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.224 2 DEBUG nova.network.neutron [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updating instance_info_cache with network_info: [{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.245 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.254 2 DEBUG oslo_concurrency.lockutils [req-6d11c269-f141-4618-a187-4c2902e55d15 req-0c467f46-d33b-41b3-8148-79afc4abd237 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:11:53 np0005485008 podman[226070]: 2025-10-13 16:11:53.589698755 +0000 UTC m=+0.055036745 container create e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 12:11:53 np0005485008 systemd[1]: Started libpod-conmon-e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990.scope.
Oct 13 12:11:53 np0005485008 podman[226070]: 2025-10-13 16:11:53.558349796 +0000 UTC m=+0.023687806 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:11:53 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:11:53 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda78c571d57831532ae937eca27b181474a29ef4201edad52b415468529b47a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:11:53 np0005485008 podman[226070]: 2025-10-13 16:11:53.697686154 +0000 UTC m=+0.163024174 container init e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 13 12:11:53 np0005485008 podman[226070]: 2025-10-13 16:11:53.710050608 +0000 UTC m=+0.175388578 container start e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:11:53 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [NOTICE]   (226090) : New worker (226092) forked
Oct 13 12:11:53 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [NOTICE]   (226090) : Loading success.
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.746 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371913.7457387, 2af0747d-d588-49b0-acfd-16748a1e4153 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.747 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] VM Started (Lifecycle Event)#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.749 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.756 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.760 2 INFO nova.virt.libvirt.driver [-] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Instance spawned successfully.#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.760 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.766 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.769 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.778 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.778 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.778 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.779 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.779 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.779 2 DEBUG nova.virt.libvirt.driver [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.785 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.786 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371913.7458892, 2af0747d-d588-49b0-acfd-16748a1e4153 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.786 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] VM Paused (Lifecycle Event)#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.810 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.814 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760371913.7515225, 2af0747d-d588-49b0-acfd-16748a1e4153 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.815 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:11:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:53.822 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.841 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.847 2 INFO nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Took 4.57 seconds to spawn the instance on the hypervisor.#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.847 2 DEBUG nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.848 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.876 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.909 2 INFO nova.compute.manager [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Took 5.27 seconds to build instance.#033[00m
Oct 13 12:11:53 np0005485008 nova_compute[192512]: 2025-10-13 16:11:53.930 2 DEBUG oslo_concurrency.lockutils [None req-952e814e-384b-4a76-8d24-20321d0cfe66 522b0644eef543dc8e89708621200039 83c033c913624d5dba20fa5669b0e083 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:55 np0005485008 nova_compute[192512]: 2025-10-13 16:11:55.117 2 DEBUG nova.compute.manager [req-94cd44b5-dc8f-4159-8937-ed17722e7fbb req-047fbb1f-83a3-4e06-810d-bb3f440e866d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:11:55 np0005485008 nova_compute[192512]: 2025-10-13 16:11:55.118 2 DEBUG oslo_concurrency.lockutils [req-94cd44b5-dc8f-4159-8937-ed17722e7fbb req-047fbb1f-83a3-4e06-810d-bb3f440e866d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:11:55 np0005485008 nova_compute[192512]: 2025-10-13 16:11:55.119 2 DEBUG oslo_concurrency.lockutils [req-94cd44b5-dc8f-4159-8937-ed17722e7fbb req-047fbb1f-83a3-4e06-810d-bb3f440e866d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:11:55 np0005485008 nova_compute[192512]: 2025-10-13 16:11:55.119 2 DEBUG oslo_concurrency.lockutils [req-94cd44b5-dc8f-4159-8937-ed17722e7fbb req-047fbb1f-83a3-4e06-810d-bb3f440e866d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:11:55 np0005485008 nova_compute[192512]: 2025-10-13 16:11:55.120 2 DEBUG nova.compute.manager [req-94cd44b5-dc8f-4159-8937-ed17722e7fbb req-047fbb1f-83a3-4e06-810d-bb3f440e866d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:11:55 np0005485008 nova_compute[192512]: 2025-10-13 16:11:55.120 2 WARNING nova.compute.manager [req-94cd44b5-dc8f-4159-8937-ed17722e7fbb req-047fbb1f-83a3-4e06-810d-bb3f440e866d 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received unexpected event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with vm_state active and task_state None.#033[00m
Oct 13 12:11:55 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:11:55.825 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:11:56 np0005485008 nova_compute[192512]: 2025-10-13 16:11:56.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:11:57 np0005485008 nova_compute[192512]: 2025-10-13 16:11:57.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:01 np0005485008 podman[226102]: 2025-10-13 16:12:01.790396916 +0000 UTC m=+0.080346080 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal)
Oct 13 12:12:01 np0005485008 nova_compute[192512]: 2025-10-13 16:12:01.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:02 np0005485008 nova_compute[192512]: 2025-10-13 16:12:02.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:05 np0005485008 podman[202884]: time="2025-10-13T16:12:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:12:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:12:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:12:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:12:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3473 "" "Go-http-client/1.1"
Oct 13 12:12:05 np0005485008 ovn_controller[94758]: 2025-10-13T16:12:05Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:46:33 10.100.0.8
Oct 13 12:12:05 np0005485008 ovn_controller[94758]: 2025-10-13T16:12:05Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:46:33 10.100.0.8
Oct 13 12:12:06 np0005485008 nova_compute[192512]: 2025-10-13 16:12:06.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:07 np0005485008 nova_compute[192512]: 2025-10-13 16:12:07.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:11 np0005485008 nova_compute[192512]: 2025-10-13 16:12:11.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:12 np0005485008 nova_compute[192512]: 2025-10-13 16:12:12.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:16 np0005485008 nova_compute[192512]: 2025-10-13 16:12:16.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:17 np0005485008 nova_compute[192512]: 2025-10-13 16:12:17.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:18 np0005485008 podman[226138]: 2025-10-13 16:12:18.769606633 +0000 UTC m=+0.069236666 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 12:12:18 np0005485008 podman[226139]: 2025-10-13 16:12:18.786399288 +0000 UTC m=+0.081533047 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 12:12:18 np0005485008 podman[226141]: 2025-10-13 16:12:18.786513152 +0000 UTC m=+0.074026128 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:12:18 np0005485008 podman[226140]: 2025-10-13 16:12:18.787750872 +0000 UTC m=+0.078654747 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 12:12:18 np0005485008 podman[226147]: 2025-10-13 16:12:18.850393237 +0000 UTC m=+0.134827895 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 12:12:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:12:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:12:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:12:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:12:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:12:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:12:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:12:21 np0005485008 nova_compute[192512]: 2025-10-13 16:12:21.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:22 np0005485008 nova_compute[192512]: 2025-10-13 16:12:22.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:23 np0005485008 ovn_controller[94758]: 2025-10-13T16:12:23Z|00281|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 13 12:12:25 np0005485008 nova_compute[192512]: 2025-10-13 16:12:25.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:25 np0005485008 nova_compute[192512]: 2025-10-13 16:12:25.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:25 np0005485008 nova_compute[192512]: 2025-10-13 16:12:25.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:12:26 np0005485008 nova_compute[192512]: 2025-10-13 16:12:26.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:26 np0005485008 nova_compute[192512]: 2025-10-13 16:12:26.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:27 np0005485008 nova_compute[192512]: 2025-10-13 16:12:27.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:28 np0005485008 nova_compute[192512]: 2025-10-13 16:12:28.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:29 np0005485008 nova_compute[192512]: 2025-10-13 16:12:29.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.635 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.636 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.636 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.636 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2af0747d-d588-49b0-acfd-16748a1e4153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:12:31 np0005485008 nova_compute[192512]: 2025-10-13 16:12:31.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:32 np0005485008 nova_compute[192512]: 2025-10-13 16:12:32.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:32 np0005485008 podman[226240]: 2025-10-13 16:12:32.754090769 +0000 UTC m=+0.060654722 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.721 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updating instance_info_cache with network_info: [{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.739 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.739 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.739 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.740 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.760 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.760 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.761 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.761 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.822 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.883 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.884 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:12:33 np0005485008 nova_compute[192512]: 2025-10-13 16:12:33.966 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:12:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:12:33.978 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:12:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:12:33.980 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:12:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:12:33.981 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.133 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.135 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5696MB free_disk=73.43448257446289GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.135 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.135 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.226 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 2af0747d-d588-49b0-acfd-16748a1e4153 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.227 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.227 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.275 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.293 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.315 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:12:34 np0005485008 nova_compute[192512]: 2025-10-13 16:12:34.316 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:12:35 np0005485008 nova_compute[192512]: 2025-10-13 16:12:35.004 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:12:35 np0005485008 podman[202884]: time="2025-10-13T16:12:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:12:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:12:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:12:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:12:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3474 "" "Go-http-client/1.1"
Oct 13 12:12:36 np0005485008 nova_compute[192512]: 2025-10-13 16:12:36.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:37 np0005485008 nova_compute[192512]: 2025-10-13 16:12:37.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:41 np0005485008 nova_compute[192512]: 2025-10-13 16:12:41.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:42 np0005485008 nova_compute[192512]: 2025-10-13 16:12:42.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:47 np0005485008 nova_compute[192512]: 2025-10-13 16:12:47.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:47 np0005485008 nova_compute[192512]: 2025-10-13 16:12:47.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:12:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:12:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:12:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:12:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:12:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:12:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:12:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:12:49 np0005485008 podman[226271]: 2025-10-13 16:12:49.765877445 +0000 UTC m=+0.069260218 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Oct 13 12:12:49 np0005485008 podman[226279]: 2025-10-13 16:12:49.783598339 +0000 UTC m=+0.068322927 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:12:49 np0005485008 podman[226273]: 2025-10-13 16:12:49.783738403 +0000 UTC m=+0.076051813 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 12:12:49 np0005485008 podman[226272]: 2025-10-13 16:12:49.796154589 +0000 UTC m=+0.092371643 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 13 12:12:49 np0005485008 podman[226284]: 2025-10-13 16:12:49.819788871 +0000 UTC m=+0.102501665 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 12:12:52 np0005485008 nova_compute[192512]: 2025-10-13 16:12:52.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:52 np0005485008 nova_compute[192512]: 2025-10-13 16:12:52.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:57 np0005485008 nova_compute[192512]: 2025-10-13 16:12:57.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:12:57 np0005485008 nova_compute[192512]: 2025-10-13 16:12:57.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:02 np0005485008 nova_compute[192512]: 2025-10-13 16:13:02.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:02 np0005485008 nova_compute[192512]: 2025-10-13 16:13:02.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:03 np0005485008 podman[226377]: 2025-10-13 16:13:03.753613322 +0000 UTC m=+0.056724608 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 12:13:05 np0005485008 podman[202884]: time="2025-10-13T16:13:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:13:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:13:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:13:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:13:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3476 "" "Go-http-client/1.1"
Oct 13 12:13:07 np0005485008 nova_compute[192512]: 2025-10-13 16:13:07.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:07 np0005485008 nova_compute[192512]: 2025-10-13 16:13:07.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:12 np0005485008 nova_compute[192512]: 2025-10-13 16:13:12.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:12 np0005485008 nova_compute[192512]: 2025-10-13 16:13:12.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:17 np0005485008 nova_compute[192512]: 2025-10-13 16:13:17.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:17 np0005485008 nova_compute[192512]: 2025-10-13 16:13:17.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:13:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:13:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:13:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:13:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:13:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:13:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:13:20 np0005485008 podman[226416]: 2025-10-13 16:13:20.783407721 +0000 UTC m=+0.065683783 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:13:20 np0005485008 podman[226413]: 2025-10-13 16:13:20.789602688 +0000 UTC m=+0.082818838 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 12:13:20 np0005485008 podman[226414]: 2025-10-13 16:13:20.798344267 +0000 UTC m=+0.086938910 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:13:20 np0005485008 podman[226415]: 2025-10-13 16:13:20.808334694 +0000 UTC m=+0.092949270 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 13 12:13:20 np0005485008 podman[226422]: 2025-10-13 16:13:20.821951378 +0000 UTC m=+0.096635448 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 13 12:13:22 np0005485008 nova_compute[192512]: 2025-10-13 16:13:22.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:22 np0005485008 nova_compute[192512]: 2025-10-13 16:13:22.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:22 np0005485008 nova_compute[192512]: 2025-10-13 16:13:22.926 2 DEBUG nova.compute.manager [None req-b44636d5-d783-48ff-8ec8-eb627ad0159d f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Adding trait COMPUTE_STATUS_DISABLED to compute node resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce in placement. update_compute_provider_status /usr/lib/python3.9/site-packages/nova/compute/manager.py:610#033[00m
Oct 13 12:13:22 np0005485008 nova_compute[192512]: 2025-10-13 16:13:22.997 2 DEBUG nova.compute.provider_tree [None req-b44636d5-d783-48ff-8ec8-eb627ad0159d f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Updating resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce generation from 25 to 29 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 13 12:13:25 np0005485008 nova_compute[192512]: 2025-10-13 16:13:25.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:25 np0005485008 nova_compute[192512]: 2025-10-13 16:13:25.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:25 np0005485008 nova_compute[192512]: 2025-10-13 16:13:25.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:13:27 np0005485008 nova_compute[192512]: 2025-10-13 16:13:27.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:27 np0005485008 nova_compute[192512]: 2025-10-13 16:13:27.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:28 np0005485008 nova_compute[192512]: 2025-10-13 16:13:28.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:29 np0005485008 nova_compute[192512]: 2025-10-13 16:13:29.381 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Check if temp file /var/lib/nova/instances/tmp5mqdi3t4 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct 13 12:13:29 np0005485008 nova_compute[192512]: 2025-10-13 16:13:29.382 2 DEBUG nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5mqdi3t4',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='2af0747d-d588-49b0-acfd-16748a1e4153',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct 13 12:13:30 np0005485008 nova_compute[192512]: 2025-10-13 16:13:30.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:30 np0005485008 nova_compute[192512]: 2025-10-13 16:13:30.432 2 DEBUG oslo_concurrency.processutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:13:30 np0005485008 nova_compute[192512]: 2025-10-13 16:13:30.523 2 DEBUG oslo_concurrency.processutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:13:30 np0005485008 nova_compute[192512]: 2025-10-13 16:13:30.524 2 DEBUG oslo_concurrency.processutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:13:30 np0005485008 nova_compute[192512]: 2025-10-13 16:13:30.587 2 DEBUG oslo_concurrency.processutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:13:31 np0005485008 nova_compute[192512]: 2025-10-13 16:13:31.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:32 np0005485008 nova_compute[192512]: 2025-10-13 16:13:32.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:32 np0005485008 nova_compute[192512]: 2025-10-13 16:13:32.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:33 np0005485008 systemd[1]: Created slice User Slice of UID 42436.
Oct 13 12:13:33 np0005485008 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 13 12:13:33 np0005485008 systemd-logind[784]: New session 38 of user nova.
Oct 13 12:13:33 np0005485008 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 13 12:13:33 np0005485008 systemd[1]: Starting User Manager for UID 42436...
Oct 13 12:13:33 np0005485008 systemd[226525]: Queued start job for default target Main User Target.
Oct 13 12:13:33 np0005485008 systemd[226525]: Created slice User Application Slice.
Oct 13 12:13:33 np0005485008 systemd[226525]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 13 12:13:33 np0005485008 systemd[226525]: Started Daily Cleanup of User's Temporary Directories.
Oct 13 12:13:33 np0005485008 systemd[226525]: Reached target Paths.
Oct 13 12:13:33 np0005485008 systemd[226525]: Reached target Timers.
Oct 13 12:13:33 np0005485008 systemd[226525]: Starting D-Bus User Message Bus Socket...
Oct 13 12:13:33 np0005485008 systemd[226525]: Starting Create User's Volatile Files and Directories...
Oct 13 12:13:33 np0005485008 systemd[226525]: Listening on D-Bus User Message Bus Socket.
Oct 13 12:13:33 np0005485008 systemd[226525]: Reached target Sockets.
Oct 13 12:13:33 np0005485008 systemd[226525]: Finished Create User's Volatile Files and Directories.
Oct 13 12:13:33 np0005485008 systemd[226525]: Reached target Basic System.
Oct 13 12:13:33 np0005485008 systemd[226525]: Reached target Main User Target.
Oct 13 12:13:33 np0005485008 systemd[226525]: Startup finished in 163ms.
Oct 13 12:13:33 np0005485008 systemd[1]: Started User Manager for UID 42436.
Oct 13 12:13:33 np0005485008 systemd[1]: Started Session 38 of User nova.
Oct 13 12:13:33 np0005485008 nova_compute[192512]: 2025-10-13 16:13:33.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:33 np0005485008 nova_compute[192512]: 2025-10-13 16:13:33.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:13:33 np0005485008 nova_compute[192512]: 2025-10-13 16:13:33.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:13:33 np0005485008 nova_compute[192512]: 2025-10-13 16:13:33.452 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:13:33 np0005485008 nova_compute[192512]: 2025-10-13 16:13:33.453 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:13:33 np0005485008 nova_compute[192512]: 2025-10-13 16:13:33.453 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 12:13:33 np0005485008 nova_compute[192512]: 2025-10-13 16:13:33.453 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2af0747d-d588-49b0-acfd-16748a1e4153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:13:33 np0005485008 systemd[1]: session-38.scope: Deactivated successfully.
Oct 13 12:13:33 np0005485008 systemd-logind[784]: Session 38 logged out. Waiting for processes to exit.
Oct 13 12:13:33 np0005485008 systemd-logind[784]: Removed session 38.
Oct 13 12:13:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:33.980 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:33.980 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:33.982 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:34 np0005485008 podman[226542]: 2025-10-13 16:13:34.785493763 +0000 UTC m=+0.083588539 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Oct 13 12:13:35 np0005485008 podman[202884]: time="2025-10-13T16:13:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:13:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:13:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:13:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:13:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3477 "" "Go-http-client/1.1"
Oct 13 12:13:35 np0005485008 nova_compute[192512]: 2025-10-13 16:13:35.819 2 DEBUG nova.compute.manager [req-6648b263-db11-46cc-96e4-c3cf9dc952c1 req-9d708fe4-a35b-49d1-bcec-5307065901ce 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:35 np0005485008 nova_compute[192512]: 2025-10-13 16:13:35.819 2 DEBUG oslo_concurrency.lockutils [req-6648b263-db11-46cc-96e4-c3cf9dc952c1 req-9d708fe4-a35b-49d1-bcec-5307065901ce 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:35 np0005485008 nova_compute[192512]: 2025-10-13 16:13:35.820 2 DEBUG oslo_concurrency.lockutils [req-6648b263-db11-46cc-96e4-c3cf9dc952c1 req-9d708fe4-a35b-49d1-bcec-5307065901ce 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:35 np0005485008 nova_compute[192512]: 2025-10-13 16:13:35.820 2 DEBUG oslo_concurrency.lockutils [req-6648b263-db11-46cc-96e4-c3cf9dc952c1 req-9d708fe4-a35b-49d1-bcec-5307065901ce 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:35 np0005485008 nova_compute[192512]: 2025-10-13 16:13:35.820 2 DEBUG nova.compute.manager [req-6648b263-db11-46cc-96e4-c3cf9dc952c1 req-9d708fe4-a35b-49d1-bcec-5307065901ce 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:35 np0005485008 nova_compute[192512]: 2025-10-13 16:13:35.820 2 DEBUG nova.compute.manager [req-6648b263-db11-46cc-96e4-c3cf9dc952c1 req-9d708fe4-a35b-49d1-bcec-5307065901ce 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:13:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:36.623 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:13:36 np0005485008 nova_compute[192512]: 2025-10-13 16:13:36.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:36 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:36.625 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.598 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updating instance_info_cache with network_info: [{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.616 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.617 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.617 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.617 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.618 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.640 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.641 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.641 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.641 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.644 2 INFO nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Took 7.05 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.644 2 DEBUG nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.660 2 DEBUG nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5mqdi3t4',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='2af0747d-d588-49b0-acfd-16748a1e4153',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(4faf9de3-ed97-4919-9286-6150142cb79d),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.680 2 DEBUG nova.objects.instance [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 2af0747d-d588-49b0-acfd-16748a1e4153 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.682 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.684 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.684 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.709 2 DEBUG nova.virt.libvirt.vif [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:11:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-611168433',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-611168433',id=26,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:11:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='83c033c913624d5dba20fa5669b0e083',ramdisk_id='',reservation_id='r-cuq71qen',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:11:53Z,user_data=None,user_id='522b0644eef543dc8e89708621200039',uuid=2af0747d-d588-49b0-acfd-16748a1e4153,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.709 2 DEBUG nova.network.os_vif_util [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.710 2 DEBUG nova.network.os_vif_util [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.710 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updating guest XML with vif config: <interface type="ethernet">
Oct 13 12:13:37 np0005485008 nova_compute[192512]:  <mac address="fa:16:3e:9c:46:33"/>
Oct 13 12:13:37 np0005485008 nova_compute[192512]:  <model type="virtio"/>
Oct 13 12:13:37 np0005485008 nova_compute[192512]:  <driver name="vhost" rx_queue_size="512"/>
Oct 13 12:13:37 np0005485008 nova_compute[192512]:  <mtu size="1442"/>
Oct 13 12:13:37 np0005485008 nova_compute[192512]:  <target dev="tap5cb75e75-71"/>
Oct 13 12:13:37 np0005485008 nova_compute[192512]: </interface>
Oct 13 12:13:37 np0005485008 nova_compute[192512]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.711 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.714 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.772 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.773 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:13:37 np0005485008 nova_compute[192512]: 2025-10-13 16:13:37.835 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.032 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.034 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5690MB free_disk=73.43431854248047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.035 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.035 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.053 2 DEBUG nova.compute.manager [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.054 2 DEBUG oslo_concurrency.lockutils [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.054 2 DEBUG oslo_concurrency.lockutils [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.054 2 DEBUG oslo_concurrency.lockutils [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.055 2 DEBUG nova.compute.manager [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.055 2 WARNING nova.compute.manager [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received unexpected event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.055 2 DEBUG nova.compute.manager [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-changed-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.055 2 DEBUG nova.compute.manager [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Refreshing instance network info cache due to event network-changed-5cb75e75-7188-4375-a206-eecc6c7b7ba1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.056 2 DEBUG oslo_concurrency.lockutils [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.056 2 DEBUG oslo_concurrency.lockutils [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.056 2 DEBUG nova.network.neutron [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Refreshing network info cache for port 5cb75e75-7188-4375-a206-eecc6c7b7ba1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.088 2 INFO nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updating resource usage from migration 4faf9de3-ed97-4919-9286-6150142cb79d#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.120 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Migration 4faf9de3-ed97-4919-9286-6150142cb79d is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.121 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.121 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.134 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.156 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.156 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.170 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.186 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Current None elapsed 0 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.186 2 INFO nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.189 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STATUS_DISABLED,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.235 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.255 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.261 2 INFO nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.278 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.279 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.765 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Current 50 elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct 13 12:13:38 np0005485008 nova_compute[192512]: 2025-10-13 16:13:38.765 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.267 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Current 50 elapsed 1 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.268 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.545 2 DEBUG nova.network.neutron [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updated VIF entry in instance network info cache for port 5cb75e75-7188-4375-a206-eecc6c7b7ba1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.546 2 DEBUG nova.network.neutron [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Updating instance_info_cache with network_info: [{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.568 2 DEBUG oslo_concurrency.lockutils [req-05f57592-1e64-4fd7-810a-0ed3bdb47832 req-d32bf077-41f0-4be7-a66b-d13e80481e5e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-2af0747d-d588-49b0-acfd-16748a1e4153" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.772 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Current 50 elapsed 2 steps [(0, 50), (300, 95), (600, 140), (900, 185), (1200, 230), (1500, 275), (1800, 320), (2100, 365), (2400, 410), (2700, 455), (3000, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.773 2 DEBUG nova.virt.libvirt.migration [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.862 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372019.8620472, 2af0747d-d588-49b0-acfd-16748a1e4153 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.863 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] VM Paused (Lifecycle Event)#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.883 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.888 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:13:39 np0005485008 nova_compute[192512]: 2025-10-13 16:13:39.909 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Oct 13 12:13:39 np0005485008 kernel: tap5cb75e75-71 (unregistering): left promiscuous mode
Oct 13 12:13:39 np0005485008 NetworkManager[51587]: <info>  [1760372019.9966] device (tap5cb75e75-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:40 np0005485008 ovn_controller[94758]: 2025-10-13T16:13:40Z|00282|binding|INFO|Releasing lport 5cb75e75-7188-4375-a206-eecc6c7b7ba1 from this chassis (sb_readonly=0)
Oct 13 12:13:40 np0005485008 ovn_controller[94758]: 2025-10-13T16:13:40Z|00283|binding|INFO|Setting lport 5cb75e75-7188-4375-a206-eecc6c7b7ba1 down in Southbound
Oct 13 12:13:40 np0005485008 ovn_controller[94758]: 2025-10-13T16:13:40Z|00284|binding|INFO|Removing iface tap5cb75e75-71 ovn-installed in OVS
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.052 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:46:33 10.100.0.8'], port_security=['fa:16:3e:9c:46:33 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '6236ce6f-4317-42ce-8c52-bcd579c0494a'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '2af0747d-d588-49b0-acfd-16748a1e4153', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83c033c913624d5dba20fa5669b0e083', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fef33bcc-d87a-449f-ae48-5109c2f8219a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=50195e55-7f0b-4829-a862-f2b636709cfc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=5cb75e75-7188-4375-a206-eecc6c7b7ba1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.054 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 5cb75e75-7188-4375-a206-eecc6c7b7ba1 in datapath 2da7d8dc-53a9-4df2-a3fc-227bd054dd7d unbound from our chassis#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.055 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2da7d8dc-53a9-4df2-a3fc-227bd054dd7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.057 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ae271bcf-a251-43ca-8c2e-817513974fa3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.058 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d namespace which is not needed anymore#033[00m
Oct 13 12:13:40 np0005485008 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct 13 12:13:40 np0005485008 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000001a.scope: Consumed 16.906s CPU time.
Oct 13 12:13:40 np0005485008 systemd-machined[152551]: Machine qemu-23-instance-0000001a terminated.
Oct 13 12:13:40 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [NOTICE]   (226090) : haproxy version is 2.8.14-c23fe91
Oct 13 12:13:40 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [NOTICE]   (226090) : path to executable is /usr/sbin/haproxy
Oct 13 12:13:40 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [WARNING]  (226090) : Exiting Master process...
Oct 13 12:13:40 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [WARNING]  (226090) : Exiting Master process...
Oct 13 12:13:40 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [ALERT]    (226090) : Current worker (226092) exited with code 143 (Terminated)
Oct 13 12:13:40 np0005485008 neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d[226086]: [WARNING]  (226090) : All workers exited. Exiting... (0)
Oct 13 12:13:40 np0005485008 kernel: tap5cb75e75-71: entered promiscuous mode
Oct 13 12:13:40 np0005485008 kernel: tap5cb75e75-71 (unregistering): left promiscuous mode
Oct 13 12:13:40 np0005485008 systemd[1]: libpod-e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990.scope: Deactivated successfully.
Oct 13 12:13:40 np0005485008 NetworkManager[51587]: <info>  [1760372020.2020] manager: (tap5cb75e75-71): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Oct 13 12:13:40 np0005485008 podman[226611]: 2025-10-13 16:13:40.207888475 +0000 UTC m=+0.052111744 container died e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.234 2 DEBUG nova.compute.manager [req-8db252c3-7014-4279-b498-1606749c1be4 req-18a80564-48e6-41ef-ae10-45585df47b29 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.235 2 DEBUG oslo_concurrency.lockutils [req-8db252c3-7014-4279-b498-1606749c1be4 req-18a80564-48e6-41ef-ae10-45585df47b29 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.236 2 DEBUG oslo_concurrency.lockutils [req-8db252c3-7014-4279-b498-1606749c1be4 req-18a80564-48e6-41ef-ae10-45585df47b29 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.236 2 DEBUG oslo_concurrency.lockutils [req-8db252c3-7014-4279-b498-1606749c1be4 req-18a80564-48e6-41ef-ae10-45585df47b29 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.236 2 DEBUG nova.compute.manager [req-8db252c3-7014-4279-b498-1606749c1be4 req-18a80564-48e6-41ef-ae10-45585df47b29 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.236 2 DEBUG nova.compute.manager [req-8db252c3-7014-4279-b498-1606749c1be4 req-18a80564-48e6-41ef-ae10-45585df47b29 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:13:40 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990-userdata-shm.mount: Deactivated successfully.
Oct 13 12:13:40 np0005485008 systemd[1]: var-lib-containers-storage-overlay-eda78c571d57831532ae937eca27b181474a29ef4201edad52b415468529b47a-merged.mount: Deactivated successfully.
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.249 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.249 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.249 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct 13 12:13:40 np0005485008 podman[226611]: 2025-10-13 16:13:40.257121097 +0000 UTC m=+0.101344366 container cleanup e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251009, io.buildah.version=1.41.3)
Oct 13 12:13:40 np0005485008 systemd[1]: libpod-conmon-e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990.scope: Deactivated successfully.
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.275 2 DEBUG nova.virt.libvirt.guest [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '2af0747d-d588-49b0-acfd-16748a1e4153' (instance-0000001a) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.276 2 INFO nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Migration operation has completed#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.276 2 INFO nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] _post_live_migration() is started..#033[00m
Oct 13 12:13:40 np0005485008 podman[226657]: 2025-10-13 16:13:40.339387895 +0000 UTC m=+0.057939636 container remove e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.345 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3bada4fd-6257-4998-a50e-f30478cf9706]: (4, ('Mon Oct 13 04:13:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d (e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990)\ne3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990\nMon Oct 13 04:13:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d (e3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990)\ne3ec3ff36ed4291408f2944fc0ca00070ef60d9bc612c4e3156ef573f8fb7990\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.347 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[bf23e97b-3fb8-4398-befb-042879c32485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.349 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2da7d8dc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:13:40 np0005485008 kernel: tap2da7d8dc-50: left promiscuous mode
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:40 np0005485008 nova_compute[192512]: 2025-10-13 16:13:40.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.373 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7f43b1-2bdc-4d8b-9142-85c77a964080]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.416 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c0119c-9d5c-44e3-b1cf-df8715797dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.418 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[88150d19-6370-4573-93a5-0da03f43ee94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.443 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[179719b9-f425-47b4-857c-b31620319edf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545050, 'reachable_time': 30015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 226675, 'error': None, 'target': 'ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:40 np0005485008 systemd[1]: run-netns-ovnmeta\x2d2da7d8dc\x2d53a9\x2d4df2\x2da3fc\x2d227bd054dd7d.mount: Deactivated successfully.
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.450 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2da7d8dc-53a9-4df2-a3fc-227bd054dd7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:13:40 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:40.450 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe5567f-fbe0-4816-b316-7e2a876faad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.008 2 DEBUG nova.compute.manager [req-efecf4cd-df37-43a5-9aa0-e8c0b4f07bdd req-050aa030-6e09-4567-89b9-fbf5e88066a5 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.008 2 DEBUG oslo_concurrency.lockutils [req-efecf4cd-df37-43a5-9aa0-e8c0b4f07bdd req-050aa030-6e09-4567-89b9-fbf5e88066a5 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.009 2 DEBUG oslo_concurrency.lockutils [req-efecf4cd-df37-43a5-9aa0-e8c0b4f07bdd req-050aa030-6e09-4567-89b9-fbf5e88066a5 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.010 2 DEBUG oslo_concurrency.lockutils [req-efecf4cd-df37-43a5-9aa0-e8c0b4f07bdd req-050aa030-6e09-4567-89b9-fbf5e88066a5 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.010 2 DEBUG nova.compute.manager [req-efecf4cd-df37-43a5-9aa0-e8c0b4f07bdd req-050aa030-6e09-4567-89b9-fbf5e88066a5 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.010 2 DEBUG nova.compute.manager [req-efecf4cd-df37-43a5-9aa0-e8c0b4f07bdd req-050aa030-6e09-4567-89b9-fbf5e88066a5 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-unplugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.068 2 DEBUG nova.network.neutron [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Activated binding for port 5cb75e75-7188-4375-a206-eecc6c7b7ba1 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.069 2 DEBUG nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.070 2 DEBUG nova.virt.libvirt.vif [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:11:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteVmWorkloadBalanceStrategy-server-611168433',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutevmworkloadbalancestrategy-server-611168433',id=26,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:11:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='83c033c913624d5dba20fa5669b0e083',ramdisk_id='',reservation_id='r-cuq71qen',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278',owner_user_name='tempest-TestExecuteVmWorkloadBalanceStrategy-1762058278-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:13:27Z,user_data=None,user_id='522b0644eef543dc8e89708621200039',uuid=2af0747d-d588-49b0-acfd-16748a1e4153,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.070 2 DEBUG nova.network.os_vif_util [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "address": "fa:16:3e:9c:46:33", "network": {"id": "2da7d8dc-53a9-4df2-a3fc-227bd054dd7d", "bridge": "br-int", "label": "tempest-TestExecuteVmWorkloadBalanceStrategy-122318613-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f23dde321dee4009a8bb63ceb5de355e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cb75e75-71", "ovs_interfaceid": "5cb75e75-7188-4375-a206-eecc6c7b7ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.071 2 DEBUG nova.network.os_vif_util [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.071 2 DEBUG os_vif [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.073 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cb75e75-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.079 2 INFO os_vif [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:46:33,bridge_name='br-int',has_traffic_filtering=True,id=5cb75e75-7188-4375-a206-eecc6c7b7ba1,network=Network(2da7d8dc-53a9-4df2-a3fc-227bd054dd7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cb75e75-71')#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.080 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.081 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.081 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.081 2 DEBUG nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.082 2 INFO nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Deleting instance files /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153_del#033[00m
Oct 13 12:13:41 np0005485008 nova_compute[192512]: 2025-10-13 16:13:41.083 2 INFO nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Deletion of /var/lib/nova/instances/2af0747d-d588-49b0-acfd-16748a1e4153_del complete#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.324 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.325 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.325 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.326 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.326 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.326 2 WARNING nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received unexpected event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.327 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.327 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.328 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.329 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.329 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.329 2 WARNING nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received unexpected event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.329 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.330 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.330 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.331 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.331 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.331 2 WARNING nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received unexpected event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.332 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.332 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.332 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.333 2 DEBUG oslo_concurrency.lockutils [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.333 2 DEBUG nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] No waiting events found dispatching network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:13:42 np0005485008 nova_compute[192512]: 2025-10-13 16:13:42.333 2 WARNING nova.compute.manager [req-61978b41-afef-479b-9103-15456e4f37a8 req-685dd9d8-4fa4-4cba-95d6-e46ed6820aac 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Received unexpected event network-vif-plugged-5cb75e75-7188-4375-a206-eecc6c7b7ba1 for instance with vm_state active and task_state migrating.#033[00m
Oct 13 12:13:43 np0005485008 systemd[1]: Stopping User Manager for UID 42436...
Oct 13 12:13:43 np0005485008 systemd[226525]: Activating special unit Exit the Session...
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped target Main User Target.
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped target Basic System.
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped target Paths.
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped target Sockets.
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped target Timers.
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 13 12:13:43 np0005485008 systemd[226525]: Closed D-Bus User Message Bus Socket.
Oct 13 12:13:43 np0005485008 systemd[226525]: Stopped Create User's Volatile Files and Directories.
Oct 13 12:13:43 np0005485008 systemd[226525]: Removed slice User Application Slice.
Oct 13 12:13:43 np0005485008 systemd[226525]: Reached target Shutdown.
Oct 13 12:13:43 np0005485008 systemd[226525]: Finished Exit the Session.
Oct 13 12:13:43 np0005485008 systemd[226525]: Reached target Exit the Session.
Oct 13 12:13:43 np0005485008 systemd[1]: user@42436.service: Deactivated successfully.
Oct 13 12:13:43 np0005485008 systemd[1]: Stopped User Manager for UID 42436.
Oct 13 12:13:43 np0005485008 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 13 12:13:43 np0005485008 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 13 12:13:43 np0005485008 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 13 12:13:43 np0005485008 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 13 12:13:43 np0005485008 systemd[1]: Removed slice User Slice of UID 42436.
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.737 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.738 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.738 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "2af0747d-d588-49b0-acfd-16748a1e4153-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.763 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.764 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.764 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.764 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.970 2 WARNING nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.971 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5866MB free_disk=73.46315383911133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.971 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:13:45 np0005485008 nova_compute[192512]: 2025-10-13 16:13:45.972 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.046 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Migration for instance 2af0747d-d588-49b0-acfd-16748a1e4153 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.076 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.104 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Migration 4faf9de3-ed97-4919-9286-6150142cb79d is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.105 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.105 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.139 2 DEBUG nova.compute.provider_tree [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.159 2 DEBUG nova.scheduler.client.report [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.188 2 DEBUG nova.compute.resource_tracker [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.188 2 DEBUG oslo_concurrency.lockutils [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.194 2 INFO nova.compute.manager [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.277 2 INFO nova.scheduler.client.report [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Deleted allocation for migration 4faf9de3-ed97-4919-9286-6150142cb79d#033[00m
Oct 13 12:13:46 np0005485008 nova_compute[192512]: 2025-10-13 16:13:46.277 2 DEBUG nova.virt.libvirt.driver [None req-0be6db88-88e7-49db-9a84-3be0780c2522 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct 13 12:13:46 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:13:46.628 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:13:47 np0005485008 nova_compute[192512]: 2025-10-13 16:13:47.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:49 np0005485008 nova_compute[192512]: 2025-10-13 16:13:49.274 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:13:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:13:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:13:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:13:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:13:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:13:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:13:51 np0005485008 nova_compute[192512]: 2025-10-13 16:13:51.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:51 np0005485008 podman[226681]: 2025-10-13 16:13:51.766395593 +0000 UTC m=+0.060573778 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 12:13:51 np0005485008 podman[226679]: 2025-10-13 16:13:51.771876285 +0000 UTC m=+0.073854124 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 12:13:51 np0005485008 podman[226680]: 2025-10-13 16:13:51.79663506 +0000 UTC m=+0.097508605 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:13:51 np0005485008 podman[226682]: 2025-10-13 16:13:51.879484197 +0000 UTC m=+0.170897306 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:13:51 np0005485008 podman[226693]: 2025-10-13 16:13:51.919640064 +0000 UTC m=+0.204332293 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 13 12:13:52 np0005485008 nova_compute[192512]: 2025-10-13 16:13:52.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:55 np0005485008 nova_compute[192512]: 2025-10-13 16:13:55.250 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760372020.2468398, 2af0747d-d588-49b0-acfd-16748a1e4153 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:13:55 np0005485008 nova_compute[192512]: 2025-10-13 16:13:55.250 2 INFO nova.compute.manager [-] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:13:55 np0005485008 nova_compute[192512]: 2025-10-13 16:13:55.269 2 DEBUG nova.compute.manager [None req-4b62c4c7-394b-4569-8ba1-383b65260ce0 - - - - - -] [instance: 2af0747d-d588-49b0-acfd-16748a1e4153] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:13:56 np0005485008 nova_compute[192512]: 2025-10-13 16:13:56.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:13:57 np0005485008 nova_compute[192512]: 2025-10-13 16:13:57.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:01 np0005485008 nova_compute[192512]: 2025-10-13 16:14:01.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:02 np0005485008 nova_compute[192512]: 2025-10-13 16:14:02.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:05 np0005485008 podman[202884]: time="2025-10-13T16:14:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:14:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:14:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:14:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:14:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 13 12:14:05 np0005485008 podman[226778]: 2025-10-13 16:14:05.747922832 +0000 UTC m=+0.056043326 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct 13 12:14:06 np0005485008 nova_compute[192512]: 2025-10-13 16:14:06.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:07 np0005485008 nova_compute[192512]: 2025-10-13 16:14:07.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:11 np0005485008 nova_compute[192512]: 2025-10-13 16:14:11.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:12 np0005485008 nova_compute[192512]: 2025-10-13 16:14:12.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:16 np0005485008 nova_compute[192512]: 2025-10-13 16:14:16.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:17 np0005485008 nova_compute[192512]: 2025-10-13 16:14:17.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:14:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:14:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:14:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:14:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:14:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:14:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:14:21 np0005485008 nova_compute[192512]: 2025-10-13 16:14:21.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:22 np0005485008 nova_compute[192512]: 2025-10-13 16:14:22.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:22 np0005485008 podman[226801]: 2025-10-13 16:14:22.760878381 +0000 UTC m=+0.052947550 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 12:14:22 np0005485008 podman[226799]: 2025-10-13 16:14:22.769660486 +0000 UTC m=+0.073364850 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:14:22 np0005485008 podman[226800]: 2025-10-13 16:14:22.775355694 +0000 UTC m=+0.072592905 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251009)
Oct 13 12:14:22 np0005485008 podman[226802]: 2025-10-13 16:14:22.782383454 +0000 UTC m=+0.074314619 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:14:22 np0005485008 podman[226808]: 2025-10-13 16:14:22.811410684 +0000 UTC m=+0.097757184 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:14:26 np0005485008 nova_compute[192512]: 2025-10-13 16:14:26.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:27 np0005485008 ovn_controller[94758]: 2025-10-13T16:14:27Z|00285|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 13 12:14:27 np0005485008 nova_compute[192512]: 2025-10-13 16:14:27.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:27 np0005485008 nova_compute[192512]: 2025-10-13 16:14:27.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:27 np0005485008 nova_compute[192512]: 2025-10-13 16:14:27.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:27 np0005485008 nova_compute[192512]: 2025-10-13 16:14:27.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:14:29 np0005485008 nova_compute[192512]: 2025-10-13 16:14:29.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:31 np0005485008 nova_compute[192512]: 2025-10-13 16:14:31.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:31 np0005485008 nova_compute[192512]: 2025-10-13 16:14:31.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:32 np0005485008 nova_compute[192512]: 2025-10-13 16:14:32.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:32 np0005485008 nova_compute[192512]: 2025-10-13 16:14:32.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:14:33.981 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:14:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:14:33.982 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:14:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:14:33.982 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.539 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.539 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.540 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.540 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.737 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.739 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5858MB free_disk=73.46315383911133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.739 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:14:34 np0005485008 nova_compute[192512]: 2025-10-13 16:14:34.739 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:14:35 np0005485008 nova_compute[192512]: 2025-10-13 16:14:35.033 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:14:35 np0005485008 nova_compute[192512]: 2025-10-13 16:14:35.033 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:14:35 np0005485008 nova_compute[192512]: 2025-10-13 16:14:35.055 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:14:35 np0005485008 nova_compute[192512]: 2025-10-13 16:14:35.171 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:14:35 np0005485008 nova_compute[192512]: 2025-10-13 16:14:35.173 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:14:35 np0005485008 nova_compute[192512]: 2025-10-13 16:14:35.173 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:14:35 np0005485008 podman[202884]: time="2025-10-13T16:14:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:14:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:14:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:14:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:14:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3011 "" "Go-http-client/1.1"
Oct 13 12:14:36 np0005485008 nova_compute[192512]: 2025-10-13 16:14:36.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:36 np0005485008 nova_compute[192512]: 2025-10-13 16:14:36.172 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:36 np0005485008 nova_compute[192512]: 2025-10-13 16:14:36.173 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:14:36 np0005485008 nova_compute[192512]: 2025-10-13 16:14:36.173 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:14:36 np0005485008 nova_compute[192512]: 2025-10-13 16:14:36.213 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:14:36 np0005485008 nova_compute[192512]: 2025-10-13 16:14:36.215 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:36 np0005485008 nova_compute[192512]: 2025-10-13 16:14:36.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:36 np0005485008 podman[226905]: 2025-10-13 16:14:36.761480547 +0000 UTC m=+0.061581651 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7)
Oct 13 12:14:37 np0005485008 nova_compute[192512]: 2025-10-13 16:14:37.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:41 np0005485008 nova_compute[192512]: 2025-10-13 16:14:41.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:42 np0005485008 nova_compute[192512]: 2025-10-13 16:14:42.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:46 np0005485008 nova_compute[192512]: 2025-10-13 16:14:46.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:47 np0005485008 nova_compute[192512]: 2025-10-13 16:14:47.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:14:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:14:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:14:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:14:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:14:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:14:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:14:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:14:49 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 12:14:49 np0005485008 rsyslogd[1000]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 13 12:14:51 np0005485008 nova_compute[192512]: 2025-10-13 16:14:51.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:52 np0005485008 nova_compute[192512]: 2025-10-13 16:14:52.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:53 np0005485008 podman[226931]: 2025-10-13 16:14:53.778673429 +0000 UTC m=+0.068521069 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:14:53 np0005485008 podman[226929]: 2025-10-13 16:14:53.783735697 +0000 UTC m=+0.081322719 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:14:53 np0005485008 podman[226930]: 2025-10-13 16:14:53.800194983 +0000 UTC m=+0.095294577 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 12:14:53 np0005485008 podman[226928]: 2025-10-13 16:14:53.800332377 +0000 UTC m=+0.102341608 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:14:53 np0005485008 podman[226937]: 2025-10-13 16:14:53.838302066 +0000 UTC m=+0.116217381 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.428 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.429 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.430 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.430 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.430 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.431 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.714 2 DEBUG nova.virt.libvirt.imagecache [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.714 2 WARNING nova.virt.libvirt.imagecache [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.714 2 INFO nova.virt.libvirt.imagecache [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Removable base files: /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.714 2 INFO nova.virt.libvirt.imagecache [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.715 2 DEBUG nova.virt.libvirt.imagecache [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.715 2 DEBUG nova.virt.libvirt.imagecache [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct 13 12:14:54 np0005485008 nova_compute[192512]: 2025-10-13 16:14:54.715 2 DEBUG nova.virt.libvirt.imagecache [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct 13 12:14:56 np0005485008 nova_compute[192512]: 2025-10-13 16:14:56.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:14:57 np0005485008 nova_compute[192512]: 2025-10-13 16:14:57.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:01 np0005485008 nova_compute[192512]: 2025-10-13 16:15:01.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:02 np0005485008 nova_compute[192512]: 2025-10-13 16:15:02.312 2 DEBUG nova.compute.manager [None req-32ba2eb6-244e-4625-ab73-ec30753c9e31 865607264bba43aa9610d9440c89e920 d93a2ce330a244f186b39e1ea3fc96a4 - - default default] Removing trait COMPUTE_STATUS_DISABLED from compute node resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce in placement. update_compute_provider_status /usr/lib/python3.9/site-packages/nova/compute/manager.py:606#033[00m
Oct 13 12:15:02 np0005485008 nova_compute[192512]: 2025-10-13 16:15:02.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:02 np0005485008 nova_compute[192512]: 2025-10-13 16:15:02.375 2 DEBUG nova.compute.provider_tree [None req-32ba2eb6-244e-4625-ab73-ec30753c9e31 865607264bba43aa9610d9440c89e920 d93a2ce330a244f186b39e1ea3fc96a4 - - default default] Updating resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce generation from 30 to 32 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct 13 12:15:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:15:03.024 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:15:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:15:03.025 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:15:03 np0005485008 nova_compute[192512]: 2025-10-13 16:15:03.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:05 np0005485008 podman[202884]: time="2025-10-13T16:15:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:15:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:15:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:15:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:15:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3004 "" "Go-http-client/1.1"
Oct 13 12:15:06 np0005485008 nova_compute[192512]: 2025-10-13 16:15:06.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:07 np0005485008 nova_compute[192512]: 2025-10-13 16:15:07.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:07 np0005485008 podman[227032]: 2025-10-13 16:15:07.759072531 +0000 UTC m=+0.067886148 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 12:15:09 np0005485008 nova_compute[192512]: 2025-10-13 16:15:09.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:11 np0005485008 nova_compute[192512]: 2025-10-13 16:15:11.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:12 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:15:12.028 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:15:12 np0005485008 nova_compute[192512]: 2025-10-13 16:15:12.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:16 np0005485008 nova_compute[192512]: 2025-10-13 16:15:16.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:17 np0005485008 nova_compute[192512]: 2025-10-13 16:15:17.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:15:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:15:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:15:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:15:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:15:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:15:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:15:21 np0005485008 nova_compute[192512]: 2025-10-13 16:15:21.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:22 np0005485008 nova_compute[192512]: 2025-10-13 16:15:22.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:24 np0005485008 podman[227057]: 2025-10-13 16:15:24.793221232 +0000 UTC m=+0.080334438 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:15:24 np0005485008 podman[227054]: 2025-10-13 16:15:24.794132931 +0000 UTC m=+0.080431891 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible)
Oct 13 12:15:24 np0005485008 podman[227055]: 2025-10-13 16:15:24.81643817 +0000 UTC m=+0.098407605 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 13 12:15:24 np0005485008 podman[227056]: 2025-10-13 16:15:24.834272289 +0000 UTC m=+0.112182466 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 12:15:24 np0005485008 podman[227058]: 2025-10-13 16:15:24.851856319 +0000 UTC m=+0.122763947 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 12:15:26 np0005485008 nova_compute[192512]: 2025-10-13 16:15:26.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:27 np0005485008 nova_compute[192512]: 2025-10-13 16:15:27.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:27 np0005485008 nova_compute[192512]: 2025-10-13 16:15:27.715 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:27 np0005485008 nova_compute[192512]: 2025-10-13 16:15:27.716 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:15:29 np0005485008 nova_compute[192512]: 2025-10-13 16:15:29.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:30 np0005485008 nova_compute[192512]: 2025-10-13 16:15:30.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:31 np0005485008 nova_compute[192512]: 2025-10-13 16:15:31.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:32 np0005485008 nova_compute[192512]: 2025-10-13 16:15:32.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:32 np0005485008 nova_compute[192512]: 2025-10-13 16:15:32.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:33 np0005485008 nova_compute[192512]: 2025-10-13 16:15:33.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:15:33.982 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:15:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:15:33.983 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:15:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:15:33.984 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.465 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.465 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.465 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.466 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.627 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.628 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5862MB free_disk=73.46315002441406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.628 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.629 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.717 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.717 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.761 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.782 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.785 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:15:34 np0005485008 nova_compute[192512]: 2025-10-13 16:15:34.785 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:15:35 np0005485008 podman[202884]: time="2025-10-13T16:15:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:15:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:15:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:15:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:15:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 13 12:15:35 np0005485008 nova_compute[192512]: 2025-10-13 16:15:35.787 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:36 np0005485008 nova_compute[192512]: 2025-10-13 16:15:36.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:36 np0005485008 nova_compute[192512]: 2025-10-13 16:15:36.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:36 np0005485008 nova_compute[192512]: 2025-10-13 16:15:36.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:15:36 np0005485008 nova_compute[192512]: 2025-10-13 16:15:36.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:15:36 np0005485008 nova_compute[192512]: 2025-10-13 16:15:36.453 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:15:37 np0005485008 nova_compute[192512]: 2025-10-13 16:15:37.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:38 np0005485008 nova_compute[192512]: 2025-10-13 16:15:38.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:38 np0005485008 podman[227160]: 2025-10-13 16:15:38.754875448 +0000 UTC m=+0.061840959 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 12:15:41 np0005485008 nova_compute[192512]: 2025-10-13 16:15:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:42 np0005485008 nova_compute[192512]: 2025-10-13 16:15:42.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:45 np0005485008 ovn_controller[94758]: 2025-10-13T16:15:45Z|00286|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Oct 13 12:15:46 np0005485008 nova_compute[192512]: 2025-10-13 16:15:46.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:46 np0005485008 nova_compute[192512]: 2025-10-13 16:15:46.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:47 np0005485008 nova_compute[192512]: 2025-10-13 16:15:47.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:48 np0005485008 nova_compute[192512]: 2025-10-13 16:15:48.077 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:15:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:15:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:15:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:15:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:15:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:15:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:15:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:15:51 np0005485008 nova_compute[192512]: 2025-10-13 16:15:51.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:51 np0005485008 nova_compute[192512]: 2025-10-13 16:15:51.898 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:51 np0005485008 nova_compute[192512]: 2025-10-13 16:15:51.899 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 12:15:52 np0005485008 nova_compute[192512]: 2025-10-13 16:15:52.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:54 np0005485008 nova_compute[192512]: 2025-10-13 16:15:54.452 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:15:54 np0005485008 nova_compute[192512]: 2025-10-13 16:15:54.452 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 12:15:54 np0005485008 nova_compute[192512]: 2025-10-13 16:15:54.474 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 12:15:55 np0005485008 podman[227185]: 2025-10-13 16:15:55.784269771 +0000 UTC m=+0.068166556 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 12:15:55 np0005485008 podman[227187]: 2025-10-13 16:15:55.809341756 +0000 UTC m=+0.080341748 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 12:15:55 np0005485008 podman[227188]: 2025-10-13 16:15:55.809526262 +0000 UTC m=+0.082230747 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:15:55 np0005485008 podman[227186]: 2025-10-13 16:15:55.80945002 +0000 UTC m=+0.086677747 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, managed_by=edpm_ansible, container_name=iscsid)
Oct 13 12:15:55 np0005485008 podman[227194]: 2025-10-13 16:15:55.853554932 +0000 UTC m=+0.118349629 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009)
Oct 13 12:15:56 np0005485008 nova_compute[192512]: 2025-10-13 16:15:56.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:15:57 np0005485008 nova_compute[192512]: 2025-10-13 16:15:57.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:01 np0005485008 nova_compute[192512]: 2025-10-13 16:16:01.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:02 np0005485008 nova_compute[192512]: 2025-10-13 16:16:02.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:03 np0005485008 nova_compute[192512]: 2025-10-13 16:16:03.768 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:05 np0005485008 podman[202884]: time="2025-10-13T16:16:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:16:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:16:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:16:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:16:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3017 "" "Go-http-client/1.1"
Oct 13 12:16:06 np0005485008 nova_compute[192512]: 2025-10-13 16:16:06.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:06 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:16:06.682 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:16:06 np0005485008 nova_compute[192512]: 2025-10-13 16:16:06.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:06 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:16:06.683 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:16:06 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:16:06.684 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:16:07 np0005485008 nova_compute[192512]: 2025-10-13 16:16:07.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:09 np0005485008 podman[227287]: 2025-10-13 16:16:09.772034314 +0000 UTC m=+0.074817004 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct 13 12:16:11 np0005485008 nova_compute[192512]: 2025-10-13 16:16:11.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:12 np0005485008 nova_compute[192512]: 2025-10-13 16:16:12.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:16 np0005485008 nova_compute[192512]: 2025-10-13 16:16:16.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:17 np0005485008 nova_compute[192512]: 2025-10-13 16:16:17.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:16:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:16:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:16:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:16:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:16:21 np0005485008 nova_compute[192512]: 2025-10-13 16:16:21.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:22 np0005485008 nova_compute[192512]: 2025-10-13 16:16:22.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:26 np0005485008 nova_compute[192512]: 2025-10-13 16:16:26.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:26 np0005485008 podman[227308]: 2025-10-13 16:16:26.765612196 +0000 UTC m=+0.065491643 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:16:26 np0005485008 podman[227310]: 2025-10-13 16:16:26.787402698 +0000 UTC m=+0.077770167 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 12:16:26 np0005485008 podman[227311]: 2025-10-13 16:16:26.793283073 +0000 UTC m=+0.084999495 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:16:26 np0005485008 podman[227312]: 2025-10-13 16:16:26.82322457 +0000 UTC m=+0.107863469 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:16:26 np0005485008 podman[227309]: 2025-10-13 16:16:26.829821017 +0000 UTC m=+0.119836745 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 12:16:27 np0005485008 nova_compute[192512]: 2025-10-13 16:16:27.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:28 np0005485008 nova_compute[192512]: 2025-10-13 16:16:28.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:28 np0005485008 nova_compute[192512]: 2025-10-13 16:16:28.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:16:31 np0005485008 nova_compute[192512]: 2025-10-13 16:16:31.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:31 np0005485008 nova_compute[192512]: 2025-10-13 16:16:31.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:32 np0005485008 nova_compute[192512]: 2025-10-13 16:16:32.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:32 np0005485008 nova_compute[192512]: 2025-10-13 16:16:32.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:33 np0005485008 nova_compute[192512]: 2025-10-13 16:16:33.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:16:33.982 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:16:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:16:33.983 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:16:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:16:33.983 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:16:34 np0005485008 nova_compute[192512]: 2025-10-13 16:16:34.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.457 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.458 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.458 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.458 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:16:35 np0005485008 podman[202884]: time="2025-10-13T16:16:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:16:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:16:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:16:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:16:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3012 "" "Go-http-client/1.1"
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.665 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.667 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5859MB free_disk=73.46315002441406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.667 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.667 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.771 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.771 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.834 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.850 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.852 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:16:35 np0005485008 nova_compute[192512]: 2025-10-13 16:16:35.853 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:16:36 np0005485008 nova_compute[192512]: 2025-10-13 16:16:36.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:37 np0005485008 nova_compute[192512]: 2025-10-13 16:16:37.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:37 np0005485008 nova_compute[192512]: 2025-10-13 16:16:37.854 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:37 np0005485008 nova_compute[192512]: 2025-10-13 16:16:37.855 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:16:37 np0005485008 nova_compute[192512]: 2025-10-13 16:16:37.855 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:16:37 np0005485008 nova_compute[192512]: 2025-10-13 16:16:37.937 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:16:37 np0005485008 nova_compute[192512]: 2025-10-13 16:16:37.938 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:38 np0005485008 nova_compute[192512]: 2025-10-13 16:16:38.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:16:40 np0005485008 podman[227411]: 2025-10-13 16:16:40.788963412 +0000 UTC m=+0.082583284 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 12:16:41 np0005485008 nova_compute[192512]: 2025-10-13 16:16:41.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:42 np0005485008 nova_compute[192512]: 2025-10-13 16:16:42.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:46 np0005485008 nova_compute[192512]: 2025-10-13 16:16:46.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:47 np0005485008 nova_compute[192512]: 2025-10-13 16:16:47.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:16:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:16:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:16:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:16:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:16:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:16:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:16:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:16:51 np0005485008 nova_compute[192512]: 2025-10-13 16:16:51.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:52 np0005485008 nova_compute[192512]: 2025-10-13 16:16:52.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:56 np0005485008 nova_compute[192512]: 2025-10-13 16:16:56.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:57 np0005485008 nova_compute[192512]: 2025-10-13 16:16:57.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:16:57 np0005485008 podman[227437]: 2025-10-13 16:16:57.772792581 +0000 UTC m=+0.060290779 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:16:57 np0005485008 podman[227434]: 2025-10-13 16:16:57.793371952 +0000 UTC m=+0.090299934 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 13 12:16:57 np0005485008 podman[227436]: 2025-10-13 16:16:57.794512827 +0000 UTC m=+0.087993001 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:16:57 np0005485008 podman[227435]: 2025-10-13 16:16:57.80743888 +0000 UTC m=+0.097257690 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 12:16:57 np0005485008 podman[227442]: 2025-10-13 16:16:57.807810882 +0000 UTC m=+0.091969576 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 12:17:01 np0005485008 nova_compute[192512]: 2025-10-13 16:17:01.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:02 np0005485008 nova_compute[192512]: 2025-10-13 16:17:02.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:05 np0005485008 podman[202884]: time="2025-10-13T16:17:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:17:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:17:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:17:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:17:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 13 12:17:06 np0005485008 nova_compute[192512]: 2025-10-13 16:17:06.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:07 np0005485008 nova_compute[192512]: 2025-10-13 16:17:07.206 2 DEBUG nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Creating tmpfile /var/lib/nova/instances/tmpdqikxstc to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:17:07 np0005485008 nova_compute[192512]: 2025-10-13 16:17:07.207 2 DEBUG nova.compute.manager [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdqikxstc',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:17:07 np0005485008 nova_compute[192512]: 2025-10-13 16:17:07.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:08 np0005485008 nova_compute[192512]: 2025-10-13 16:17:08.448 2 DEBUG nova.compute.manager [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdqikxstc',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4c9ea3ef-43ec-461d-8a85-cc05b1330ae2',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:17:08 np0005485008 nova_compute[192512]: 2025-10-13 16:17:08.508 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:17:08 np0005485008 nova_compute[192512]: 2025-10-13 16:17:08.508 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:17:08 np0005485008 nova_compute[192512]: 2025-10-13 16:17:08.508 2 DEBUG nova.network.neutron [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.739 2 DEBUG nova.network.neutron [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Updating instance_info_cache with network_info: [{"id": "a030d143-cbb5-404a-927a-dea2b3268fed", "address": "fa:16:3e:c9:16:92", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa030d143-cb", "ovs_interfaceid": "a030d143-cbb5-404a-927a-dea2b3268fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.763 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.766 2 DEBUG nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdqikxstc',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4c9ea3ef-43ec-461d-8a85-cc05b1330ae2',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.768 2 DEBUG nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Creating instance directory: /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.768 2 DEBUG nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Creating disk.info with the contents: {'/var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk': 'qcow2', '/var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.769 2 DEBUG nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.770 2 DEBUG nova.objects.instance [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:17:11 np0005485008 podman[227534]: 2025-10-13 16:17:11.777632651 +0000 UTC m=+0.080381046 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.818 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.911 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.913 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.914 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:11 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.936 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:11.999 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.001 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.039 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.041 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.041 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.099 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.100 2 DEBUG nova.virt.disk.api [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.101 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.182 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.184 2 DEBUG nova.virt.disk.api [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.185 2 DEBUG nova.objects.instance [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.203 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.231 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk.config 485376" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.234 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk.config to /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.235 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk.config /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.717 2 DEBUG oslo_concurrency.processutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2/disk.config /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.718 2 DEBUG nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.720 2 DEBUG nova.virt.libvirt.vif [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-857598529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-857598529',id=30,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:16:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d49081b07bec47cb94825500c1227cc5',ramdisk_id='',reservation_id='r-n57eo1wb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:16:46Z,user_data=None,user_id='885a697ea55c4785a65f452bcfd48f00',uuid=4c9ea3ef-43ec-461d-8a85-cc05b1330ae2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a030d143-cbb5-404a-927a-dea2b3268fed", "address": "fa:16:3e:c9:16:92", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa030d143-cb", "ovs_interfaceid": "a030d143-cbb5-404a-927a-dea2b3268fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.720 2 DEBUG nova.network.os_vif_util [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "a030d143-cbb5-404a-927a-dea2b3268fed", "address": "fa:16:3e:c9:16:92", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa030d143-cb", "ovs_interfaceid": "a030d143-cbb5-404a-927a-dea2b3268fed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.721 2 DEBUG nova.network.os_vif_util [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:16:92,bridge_name='br-int',has_traffic_filtering=True,id=a030d143-cbb5-404a-927a-dea2b3268fed,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa030d143-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.722 2 DEBUG os_vif [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:16:92,bridge_name='br-int',has_traffic_filtering=True,id=a030d143-cbb5-404a-927a-dea2b3268fed,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa030d143-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.723 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.724 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa030d143-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa030d143-cb, col_values=(('external_ids', {'iface-id': 'a030d143-cbb5-404a-927a-dea2b3268fed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:16:92', 'vm-uuid': '4c9ea3ef-43ec-461d-8a85-cc05b1330ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:12 np0005485008 NetworkManager[51587]: <info>  [1760372232.7306] manager: (tapa030d143-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.740 2 INFO os_vif [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:16:92,bridge_name='br-int',has_traffic_filtering=True,id=a030d143-cbb5-404a-927a-dea2b3268fed,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa030d143-cb')#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.741 2 DEBUG nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:17:12 np0005485008 nova_compute[192512]: 2025-10-13 16:17:12.741 2 DEBUG nova.compute.manager [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdqikxstc',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4c9ea3ef-43ec-461d-8a85-cc05b1330ae2',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:17:14 np0005485008 nova_compute[192512]: 2025-10-13 16:17:14.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:14 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:14.869 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:17:14 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:14.871 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:17:15 np0005485008 nova_compute[192512]: 2025-10-13 16:17:15.675 2 DEBUG nova.network.neutron [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Port a030d143-cbb5-404a-927a-dea2b3268fed updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:17:15 np0005485008 nova_compute[192512]: 2025-10-13 16:17:15.677 2 DEBUG nova.compute.manager [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdqikxstc',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='4c9ea3ef-43ec-461d-8a85-cc05b1330ae2',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:17:15 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 12:17:15 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 12:17:16 np0005485008 kernel: tapa030d143-cb: entered promiscuous mode
Oct 13 12:17:16 np0005485008 NetworkManager[51587]: <info>  [1760372236.1546] manager: (tapa030d143-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Oct 13 12:17:16 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:16Z|00287|binding|INFO|Claiming lport a030d143-cbb5-404a-927a-dea2b3268fed for this additional chassis.
Oct 13 12:17:16 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:16Z|00288|binding|INFO|a030d143-cbb5-404a-927a-dea2b3268fed: Claiming fa:16:3e:c9:16:92 10.100.0.10
Oct 13 12:17:16 np0005485008 nova_compute[192512]: 2025-10-13 16:17:16.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:16 np0005485008 systemd-machined[152551]: New machine qemu-24-instance-0000001e.
Oct 13 12:17:16 np0005485008 systemd-udevd[227611]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:17:16 np0005485008 nova_compute[192512]: 2025-10-13 16:17:16.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:16 np0005485008 NetworkManager[51587]: <info>  [1760372236.2251] device (tapa030d143-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:17:16 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:16Z|00289|binding|INFO|Setting lport a030d143-cbb5-404a-927a-dea2b3268fed ovn-installed in OVS
Oct 13 12:17:16 np0005485008 NetworkManager[51587]: <info>  [1760372236.2260] device (tapa030d143-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:17:16 np0005485008 nova_compute[192512]: 2025-10-13 16:17:16.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:16 np0005485008 systemd[1]: Started Virtual Machine qemu-24-instance-0000001e.
Oct 13 12:17:17 np0005485008 nova_compute[192512]: 2025-10-13 16:17:17.459 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372237.4586773, 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:17:17 np0005485008 nova_compute[192512]: 2025-10-13 16:17:17.460 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] VM Started (Lifecycle Event)#033[00m
Oct 13 12:17:17 np0005485008 nova_compute[192512]: 2025-10-13 16:17:17.494 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:17:17 np0005485008 nova_compute[192512]: 2025-10-13 16:17:17.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:17 np0005485008 nova_compute[192512]: 2025-10-13 16:17:17.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:18 np0005485008 nova_compute[192512]: 2025-10-13 16:17:18.175 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372238.1752338, 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:17:18 np0005485008 nova_compute[192512]: 2025-10-13 16:17:18.176 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:17:18 np0005485008 nova_compute[192512]: 2025-10-13 16:17:18.212 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:17:18 np0005485008 nova_compute[192512]: 2025-10-13 16:17:18.216 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:17:18 np0005485008 nova_compute[192512]: 2025-10-13 16:17:18.273 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:17:19 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:19Z|00290|binding|INFO|Claiming lport a030d143-cbb5-404a-927a-dea2b3268fed for this chassis.
Oct 13 12:17:19 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:19Z|00291|binding|INFO|a030d143-cbb5-404a-927a-dea2b3268fed: Claiming fa:16:3e:c9:16:92 10.100.0.10
Oct 13 12:17:19 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:19Z|00292|binding|INFO|Setting lport a030d143-cbb5-404a-927a-dea2b3268fed up in Southbound
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.127 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:16:92 10.100.0.10'], port_security=['fa:16:3e:c9:16:92 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c9ea3ef-43ec-461d-8a85-cc05b1330ae2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7719b000-5e4d-4d3a-b708-5359f703a47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd49081b07bec47cb94825500c1227cc5', 'neutron:revision_number': '11', 'neutron:security_group_ids': '57819618-944c-49e8-878d-f8990222aa02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbea28d2-ff4d-4b7b-9e11-50e566aaedc6, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=a030d143-cbb5-404a-927a-dea2b3268fed) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.129 103642 INFO neutron.agent.ovn.metadata.agent [-] Port a030d143-cbb5-404a-927a-dea2b3268fed in datapath 7719b000-5e4d-4d3a-b708-5359f703a47f bound to our chassis#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.132 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7719b000-5e4d-4d3a-b708-5359f703a47f#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.151 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bbc48a-625c-4426-942a-23691b9c97df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.153 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7719b000-51 in ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.158 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7719b000-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.158 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[099ea5a3-81df-4a08-b790-b4ff0daa3691]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.159 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[27177a93-a58c-4603-9f57-cf5978a612e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.181 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ae94ad-d792-4d7a-8898-022b76293f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.201 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c721b995-f369-4b22-8b64-98b95dc7a877]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.250 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[e26f4dc1-0e90-4f2a-a40d-d543c3f740d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 NetworkManager[51587]: <info>  [1760372239.2589] manager: (tap7719b000-50): new Veth device (/org/freedesktop/NetworkManager/Devices/104)
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.258 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[fc45fcd8-9da8-4e02-a2e0-5871a3688147]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 systemd-udevd[227649]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.306 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[2186c863-3336-42bb-80a2-b43ba2d91f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.311 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[f53a5e2a-139f-4ee6-8d65-ce8115e3f4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 NetworkManager[51587]: <info>  [1760372239.3423] device (tap7719b000-50): carrier: link connected
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.351 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[fd60d973-b3b2-40f5-95b6-ea2b3a5ae796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.364 2 INFO nova.compute.manager [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Post operation of migration started#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.374 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c1d314c9-4d47-43ae-894b-31ca54f6e010]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7719b000-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:60:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577700, 'reachable_time': 44856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 227668, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.398 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f10f1557-590e-4352-8100-c6c684029c55]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:6031'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 577700, 'tstamp': 577700}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 227669, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:17:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:17:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:17:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:17:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:17:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:17:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.426 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[327231bc-8804-4270-95ce-a244dc71a8eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7719b000-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:60:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577700, 'reachable_time': 44856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 227670, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.472 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3011d0c0-f836-4d73-ac61-0897dd7c322e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.559 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[75b38038-5f28-4e7f-9a48-4466b18257d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.561 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7719b000-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.562 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.562 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7719b000-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:19 np0005485008 kernel: tap7719b000-50: entered promiscuous mode
Oct 13 12:17:19 np0005485008 NetworkManager[51587]: <info>  [1760372239.5663] manager: (tap7719b000-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.570 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7719b000-50, col_values=(('external_ids', {'iface-id': 'a9ea0e42-f04e-4e6a-a438-9f22d491c237'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:19 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:19Z|00293|binding|INFO|Releasing lport a9ea0e42-f04e-4e6a-a438-9f22d491c237 from this chassis (sb_readonly=0)
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.598 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7719b000-5e4d-4d3a-b708-5359f703a47f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7719b000-5e4d-4d3a-b708-5359f703a47f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.600 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0e54ff-1155-4795-889e-a7b69f3e4135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.601 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-7719b000-5e4d-4d3a-b708-5359f703a47f
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/7719b000-5e4d-4d3a-b708-5359f703a47f.pid.haproxy
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 7719b000-5e4d-4d3a-b708-5359f703a47f
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:17:19 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:19.602 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'env', 'PROCESS_TAG=haproxy-7719b000-5e4d-4d3a-b708-5359f703a47f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7719b000-5e4d-4d3a-b708-5359f703a47f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.875 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.876 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:17:19 np0005485008 nova_compute[192512]: 2025-10-13 16:17:19.876 2 DEBUG nova.network.neutron [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:17:20 np0005485008 podman[227703]: 2025-10-13 16:17:20.092773104 +0000 UTC m=+0.075790290 container create 641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:17:20 np0005485008 systemd[1]: Started libpod-conmon-641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850.scope.
Oct 13 12:17:20 np0005485008 podman[227703]: 2025-10-13 16:17:20.060065806 +0000 UTC m=+0.043082992 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:17:20 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:17:20 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/816502badca0d1df9181367c3712d8c414b2b67386a0fa6f296f6e15d1d8a5d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:17:20 np0005485008 podman[227703]: 2025-10-13 16:17:20.220653528 +0000 UTC m=+0.203670764 container init 641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:17:20 np0005485008 podman[227703]: 2025-10-13 16:17:20.23224888 +0000 UTC m=+0.215266066 container start 641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 12:17:20 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [NOTICE]   (227723) : New worker (227725) forked
Oct 13 12:17:20 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [NOTICE]   (227723) : Loading success.
Oct 13 12:17:21 np0005485008 nova_compute[192512]: 2025-10-13 16:17:21.095 2 DEBUG nova.network.neutron [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Updating instance_info_cache with network_info: [{"id": "a030d143-cbb5-404a-927a-dea2b3268fed", "address": "fa:16:3e:c9:16:92", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa030d143-cb", "ovs_interfaceid": "a030d143-cbb5-404a-927a-dea2b3268fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:17:21 np0005485008 nova_compute[192512]: 2025-10-13 16:17:21.117 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:17:21 np0005485008 nova_compute[192512]: 2025-10-13 16:17:21.137 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:21 np0005485008 nova_compute[192512]: 2025-10-13 16:17:21.137 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:21 np0005485008 nova_compute[192512]: 2025-10-13 16:17:21.138 2 DEBUG oslo_concurrency.lockutils [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:21 np0005485008 nova_compute[192512]: 2025-10-13 16:17:21.143 2 INFO nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:17:21 np0005485008 virtqemud[192082]: Domain id=24 name='instance-0000001e' uuid=4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 is tainted: custom-monitor
Oct 13 12:17:22 np0005485008 nova_compute[192512]: 2025-10-13 16:17:22.152 2 INFO nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:17:22 np0005485008 nova_compute[192512]: 2025-10-13 16:17:22.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:22 np0005485008 nova_compute[192512]: 2025-10-13 16:17:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:23 np0005485008 nova_compute[192512]: 2025-10-13 16:17:23.161 2 INFO nova.virt.libvirt.driver [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:17:23 np0005485008 nova_compute[192512]: 2025-10-13 16:17:23.169 2 DEBUG nova.compute.manager [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:17:23 np0005485008 nova_compute[192512]: 2025-10-13 16:17:23.212 2 DEBUG nova.objects.instance [None req-ee9627ba-c6a9-4852-be9b-921b817a5f22 f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:17:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:24.873 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:27 np0005485008 nova_compute[192512]: 2025-10-13 16:17:27.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:27 np0005485008 nova_compute[192512]: 2025-10-13 16:17:27.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:28 np0005485008 podman[227735]: 2025-10-13 16:17:28.783381784 +0000 UTC m=+0.079008722 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 12:17:28 np0005485008 podman[227737]: 2025-10-13 16:17:28.794606153 +0000 UTC m=+0.088169836 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 12:17:28 np0005485008 podman[227738]: 2025-10-13 16:17:28.797160323 +0000 UTC m=+0.078274039 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:17:28 np0005485008 podman[227736]: 2025-10-13 16:17:28.828577362 +0000 UTC m=+0.116428197 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 13 12:17:28 np0005485008 podman[227739]: 2025-10-13 16:17:28.851705572 +0000 UTC m=+0.133716585 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_controller)
Oct 13 12:17:29 np0005485008 nova_compute[192512]: 2025-10-13 16:17:29.432 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:29 np0005485008 nova_compute[192512]: 2025-10-13 16:17:29.434 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.921 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Acquiring lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.922 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.923 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Acquiring lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.923 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.923 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.925 2 INFO nova.compute.manager [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Terminating instance#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.926 2 DEBUG nova.compute.manager [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:17:30 np0005485008 kernel: tapa030d143-cb (unregistering): left promiscuous mode
Oct 13 12:17:30 np0005485008 NetworkManager[51587]: <info>  [1760372250.9524] device (tapa030d143-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:30Z|00294|binding|INFO|Releasing lport a030d143-cbb5-404a-927a-dea2b3268fed from this chassis (sb_readonly=0)
Oct 13 12:17:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:30Z|00295|binding|INFO|Setting lport a030d143-cbb5-404a-927a-dea2b3268fed down in Southbound
Oct 13 12:17:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:17:30Z|00296|binding|INFO|Removing iface tapa030d143-cb ovn-installed in OVS
Oct 13 12:17:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:30.993 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:16:92 10.100.0.10'], port_security=['fa:16:3e:c9:16:92 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4c9ea3ef-43ec-461d-8a85-cc05b1330ae2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7719b000-5e4d-4d3a-b708-5359f703a47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd49081b07bec47cb94825500c1227cc5', 'neutron:revision_number': '11', 'neutron:security_group_ids': '57819618-944c-49e8-878d-f8990222aa02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbea28d2-ff4d-4b7b-9e11-50e566aaedc6, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=a030d143-cbb5-404a-927a-dea2b3268fed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:17:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:30.995 103642 INFO neutron.agent.ovn.metadata.agent [-] Port a030d143-cbb5-404a-927a-dea2b3268fed in datapath 7719b000-5e4d-4d3a-b708-5359f703a47f unbound from our chassis#033[00m
Oct 13 12:17:30 np0005485008 nova_compute[192512]: 2025-10-13 16:17:30.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:30.996 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7719b000-5e4d-4d3a-b708-5359f703a47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:17:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:30.997 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[42bd6169-55a1-4a37-a945-16b59acc6c21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:30.998 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f namespace which is not needed anymore#033[00m
Oct 13 12:17:31 np0005485008 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct 13 12:17:31 np0005485008 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000001e.scope: Consumed 2.301s CPU time.
Oct 13 12:17:31 np0005485008 systemd-machined[152551]: Machine qemu-24-instance-0000001e terminated.
Oct 13 12:17:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [NOTICE]   (227723) : haproxy version is 2.8.14-c23fe91
Oct 13 12:17:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [NOTICE]   (227723) : path to executable is /usr/sbin/haproxy
Oct 13 12:17:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [WARNING]  (227723) : Exiting Master process...
Oct 13 12:17:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [WARNING]  (227723) : Exiting Master process...
Oct 13 12:17:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [ALERT]    (227723) : Current worker (227725) exited with code 143 (Terminated)
Oct 13 12:17:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[227719]: [WARNING]  (227723) : All workers exited. Exiting... (0)
Oct 13 12:17:31 np0005485008 systemd[1]: libpod-641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850.scope: Deactivated successfully.
Oct 13 12:17:31 np0005485008 podman[227863]: 2025-10-13 16:17:31.140265497 +0000 UTC m=+0.047070327 container died 641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 12:17:31 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850-userdata-shm.mount: Deactivated successfully.
Oct 13 12:17:31 np0005485008 systemd[1]: var-lib-containers-storage-overlay-816502badca0d1df9181367c3712d8c414b2b67386a0fa6f296f6e15d1d8a5d2-merged.mount: Deactivated successfully.
Oct 13 12:17:31 np0005485008 podman[227863]: 2025-10-13 16:17:31.184350641 +0000 UTC m=+0.091155471 container cleanup 641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 13 12:17:31 np0005485008 systemd[1]: libpod-conmon-641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850.scope: Deactivated successfully.
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.206 2 INFO nova.virt.libvirt.driver [-] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Instance destroyed successfully.#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.208 2 DEBUG nova.objects.instance [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lazy-loading 'resources' on Instance uuid 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.221 2 DEBUG nova.virt.libvirt.vif [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:16:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-857598529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-857598529',id=30,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:16:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d49081b07bec47cb94825500c1227cc5',ramdisk_id='',reservation_id='r-n57eo1wb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:17:23Z,user_data=None,user_id='885a697ea55c4785a65f452bcfd48f00',uuid=4c9ea3ef-43ec-461d-8a85-cc05b1330ae2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a030d143-cbb5-404a-927a-dea2b3268fed", "address": "fa:16:3e:c9:16:92", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa030d143-cb", "ovs_interfaceid": "a030d143-cbb5-404a-927a-dea2b3268fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.221 2 DEBUG nova.network.os_vif_util [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Converting VIF {"id": "a030d143-cbb5-404a-927a-dea2b3268fed", "address": "fa:16:3e:c9:16:92", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa030d143-cb", "ovs_interfaceid": "a030d143-cbb5-404a-927a-dea2b3268fed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.222 2 DEBUG nova.network.os_vif_util [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c9:16:92,bridge_name='br-int',has_traffic_filtering=True,id=a030d143-cbb5-404a-927a-dea2b3268fed,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa030d143-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.222 2 DEBUG os_vif [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:16:92,bridge_name='br-int',has_traffic_filtering=True,id=a030d143-cbb5-404a-927a-dea2b3268fed,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa030d143-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa030d143-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.233 2 INFO os_vif [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c9:16:92,bridge_name='br-int',has_traffic_filtering=True,id=a030d143-cbb5-404a-927a-dea2b3268fed,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa030d143-cb')#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.234 2 INFO nova.virt.libvirt.driver [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Deleting instance files /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2_del#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.235 2 INFO nova.virt.libvirt.driver [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Deletion of /var/lib/nova/instances/4c9ea3ef-43ec-461d-8a85-cc05b1330ae2_del complete#033[00m
Oct 13 12:17:31 np0005485008 podman[227909]: 2025-10-13 16:17:31.264445075 +0000 UTC m=+0.048491471 container remove 641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.271 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f449859d-b44d-4513-90ef-efd274a18e56]: (4, ('Mon Oct 13 04:17:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f (641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850)\n641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850\nMon Oct 13 04:17:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f (641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850)\n641c88d1fad7c5c6a6b4c4f2e926e63d35b866f556bc751070d87d1c9d617850\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.273 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a4776d-0b20-4c22-9719-6f3916708f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.274 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7719b000-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:31 np0005485008 kernel: tap7719b000-50: left promiscuous mode
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.296 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[457635bb-ce63-4131-b7a6-240405bd0635]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.318 2 INFO nova.compute.manager [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.319 2 DEBUG oslo.service.loopingcall [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.319 2 DEBUG nova.compute.manager [-] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.319 2 DEBUG nova.network.neutron [-] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.319 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[be33a0ca-b10a-40e8-8fab-05a964f5bb05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.320 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a026371d-c763-42f7-ad66-6cc46fca3395]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.330 2 DEBUG nova.compute.manager [req-1f00b67b-7910-4d8e-93f1-8a7c031be6c2 req-6d7e39ae-32e0-4ca8-a3e7-dd8850e034e6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Received event network-vif-unplugged-a030d143-cbb5-404a-927a-dea2b3268fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.330 2 DEBUG oslo_concurrency.lockutils [req-1f00b67b-7910-4d8e-93f1-8a7c031be6c2 req-6d7e39ae-32e0-4ca8-a3e7-dd8850e034e6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.330 2 DEBUG oslo_concurrency.lockutils [req-1f00b67b-7910-4d8e-93f1-8a7c031be6c2 req-6d7e39ae-32e0-4ca8-a3e7-dd8850e034e6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.331 2 DEBUG oslo_concurrency.lockutils [req-1f00b67b-7910-4d8e-93f1-8a7c031be6c2 req-6d7e39ae-32e0-4ca8-a3e7-dd8850e034e6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.331 2 DEBUG nova.compute.manager [req-1f00b67b-7910-4d8e-93f1-8a7c031be6c2 req-6d7e39ae-32e0-4ca8-a3e7-dd8850e034e6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] No waiting events found dispatching network-vif-unplugged-a030d143-cbb5-404a-927a-dea2b3268fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:17:31 np0005485008 nova_compute[192512]: 2025-10-13 16:17:31.331 2 DEBUG nova.compute.manager [req-1f00b67b-7910-4d8e-93f1-8a7c031be6c2 req-6d7e39ae-32e0-4ca8-a3e7-dd8850e034e6 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Received event network-vif-unplugged-a030d143-cbb5-404a-927a-dea2b3268fed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.337 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c3b370-83cd-4b0c-a329-af1849b445e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 577690, 'reachable_time': 42731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 227925, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:31 np0005485008 systemd[1]: run-netns-ovnmeta\x2d7719b000\x2d5e4d\x2d4d3a\x2db708\x2d5359f703a47f.mount: Deactivated successfully.
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.341 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:17:31 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:31.341 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4a8f9e-15e6-4abe-b60e-25392cb0b656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:17:32 np0005485008 nova_compute[192512]: 2025-10-13 16:17:32.425 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:32 np0005485008 nova_compute[192512]: 2025-10-13 16:17:32.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:32 np0005485008 nova_compute[192512]: 2025-10-13 16:17:32.809 2 DEBUG nova.network.neutron [-] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:17:32 np0005485008 nova_compute[192512]: 2025-10-13 16:17:32.841 2 INFO nova.compute.manager [-] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct 13 12:17:32 np0005485008 nova_compute[192512]: 2025-10-13 16:17:32.915 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:32 np0005485008 nova_compute[192512]: 2025-10-13 16:17:32.915 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:32 np0005485008 nova_compute[192512]: 2025-10-13 16:17:32.921 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.021 2 INFO nova.scheduler.client.report [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Deleted allocations for instance 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.109 2 DEBUG oslo_concurrency.lockutils [None req-7042f300-c667-44a2-86f5-d15959798642 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.441 2 DEBUG nova.compute.manager [req-52dd5a08-5679-487b-a1d3-c13bf2e242b8 req-2b73d178-1807-41cc-8ac9-96e32cec62f7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Received event network-vif-plugged-a030d143-cbb5-404a-927a-dea2b3268fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.441 2 DEBUG oslo_concurrency.lockutils [req-52dd5a08-5679-487b-a1d3-c13bf2e242b8 req-2b73d178-1807-41cc-8ac9-96e32cec62f7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.441 2 DEBUG oslo_concurrency.lockutils [req-52dd5a08-5679-487b-a1d3-c13bf2e242b8 req-2b73d178-1807-41cc-8ac9-96e32cec62f7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.442 2 DEBUG oslo_concurrency.lockutils [req-52dd5a08-5679-487b-a1d3-c13bf2e242b8 req-2b73d178-1807-41cc-8ac9-96e32cec62f7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "4c9ea3ef-43ec-461d-8a85-cc05b1330ae2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.442 2 DEBUG nova.compute.manager [req-52dd5a08-5679-487b-a1d3-c13bf2e242b8 req-2b73d178-1807-41cc-8ac9-96e32cec62f7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] No waiting events found dispatching network-vif-plugged-a030d143-cbb5-404a-927a-dea2b3268fed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.442 2 WARNING nova.compute.manager [req-52dd5a08-5679-487b-a1d3-c13bf2e242b8 req-2b73d178-1807-41cc-8ac9-96e32cec62f7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Received unexpected event network-vif-plugged-a030d143-cbb5-404a-927a-dea2b3268fed for instance with vm_state deleted and task_state None.#033[00m
Oct 13 12:17:33 np0005485008 nova_compute[192512]: 2025-10-13 16:17:33.442 2 DEBUG nova.compute.manager [req-52dd5a08-5679-487b-a1d3-c13bf2e242b8 req-2b73d178-1807-41cc-8ac9-96e32cec62f7 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Received event network-vif-deleted-a030d143-cbb5-404a-927a-dea2b3268fed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:17:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:33.984 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:33.985 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:17:33.985 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:34 np0005485008 nova_compute[192512]: 2025-10-13 16:17:34.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:35 np0005485008 podman[202884]: time="2025-10-13T16:17:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:17:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:17:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:17:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:17:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3013 "" "Go-http-client/1.1"
Oct 13 12:17:36 np0005485008 nova_compute[192512]: 2025-10-13 16:17:36.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:36 np0005485008 nova_compute[192512]: 2025-10-13 16:17:36.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.444 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.445 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.466 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.468 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.700 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.701 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5837MB free_disk=73.46314239501953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.701 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.702 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.753 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.753 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.782 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.796 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.797 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:17:37 np0005485008 nova_compute[192512]: 2025-10-13 16:17:37.797 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:17:38 np0005485008 nova_compute[192512]: 2025-10-13 16:17:38.780 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:40 np0005485008 nova_compute[192512]: 2025-10-13 16:17:40.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:41 np0005485008 nova_compute[192512]: 2025-10-13 16:17:41.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:42 np0005485008 nova_compute[192512]: 2025-10-13 16:17:42.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:42 np0005485008 podman[227927]: 2025-10-13 16:17:42.78360356 +0000 UTC m=+0.082076367 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., 
io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Oct 13 12:17:46 np0005485008 nova_compute[192512]: 2025-10-13 16:17:46.206 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760372251.2046008, 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:17:46 np0005485008 nova_compute[192512]: 2025-10-13 16:17:46.207 2 INFO nova.compute.manager [-] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:17:46 np0005485008 nova_compute[192512]: 2025-10-13 16:17:46.234 2 DEBUG nova.compute.manager [None req-a3abf7d2-a4ec-4422-ad98-6eafce66bb8c - - - - - -] [instance: 4c9ea3ef-43ec-461d-8a85-cc05b1330ae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:17:46 np0005485008 nova_compute[192512]: 2025-10-13 16:17:46.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:46 np0005485008 nova_compute[192512]: 2025-10-13 16:17:46.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:17:47 np0005485008 nova_compute[192512]: 2025-10-13 16:17:47.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:17:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:17:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:17:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:17:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:17:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:17:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:17:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:17:51 np0005485008 nova_compute[192512]: 2025-10-13 16:17:51.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:52 np0005485008 nova_compute[192512]: 2025-10-13 16:17:52.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:56 np0005485008 nova_compute[192512]: 2025-10-13 16:17:56.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:57 np0005485008 nova_compute[192512]: 2025-10-13 16:17:57.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:17:59 np0005485008 podman[227952]: 2025-10-13 16:17:59.765431749 +0000 UTC m=+0.059522185 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 13 12:17:59 np0005485008 podman[227950]: 2025-10-13 16:17:59.765673906 +0000 UTC m=+0.066285496 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:17:59 np0005485008 podman[227953]: 2025-10-13 16:17:59.773278773 +0000 UTC m=+0.061486216 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:17:59 np0005485008 podman[227959]: 2025-10-13 16:17:59.799057946 +0000 UTC m=+0.083284785 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller)
Oct 13 12:17:59 np0005485008 podman[227951]: 2025-10-13 16:17:59.804483755 +0000 UTC m=+0.098734456 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:18:01 np0005485008 nova_compute[192512]: 2025-10-13 16:18:01.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:02 np0005485008 nova_compute[192512]: 2025-10-13 16:18:02.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:05 np0005485008 podman[202884]: time="2025-10-13T16:18:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:18:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:18:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:18:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:18:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3005 "" "Go-http-client/1.1"
Oct 13 12:18:06 np0005485008 nova_compute[192512]: 2025-10-13 16:18:06.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:07 np0005485008 nova_compute[192512]: 2025-10-13 16:18:07.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:11 np0005485008 nova_compute[192512]: 2025-10-13 16:18:11.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:12 np0005485008 nova_compute[192512]: 2025-10-13 16:18:12.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:13 np0005485008 podman[228053]: 2025-10-13 16:18:13.780276491 +0000 UTC m=+0.080518719 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 13 12:18:16 np0005485008 nova_compute[192512]: 2025-10-13 16:18:16.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:17 np0005485008 nova_compute[192512]: 2025-10-13 16:18:17.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:18:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:18:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:18:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:18:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:18:21 np0005485008 nova_compute[192512]: 2025-10-13 16:18:21.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:22 np0005485008 nova_compute[192512]: 2025-10-13 16:18:22.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:18:23.828 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:18:23 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:18:23.829 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:18:23 np0005485008 nova_compute[192512]: 2025-10-13 16:18:23.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:26 np0005485008 nova_compute[192512]: 2025-10-13 16:18:26.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:27 np0005485008 nova_compute[192512]: 2025-10-13 16:18:27.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:30 np0005485008 podman[228078]: 2025-10-13 16:18:30.798389491 +0000 UTC m=+0.086317130 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd)
Oct 13 12:18:30 np0005485008 podman[228079]: 2025-10-13 16:18:30.806208084 +0000 UTC m=+0.085580147 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:18:30 np0005485008 podman[228080]: 2025-10-13 16:18:30.823608427 +0000 UTC m=+0.098489169 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 13 12:18:30 np0005485008 podman[228081]: 2025-10-13 16:18:30.830289954 +0000 UTC m=+0.099746167 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:18:30 np0005485008 podman[228087]: 2025-10-13 16:18:30.860200756 +0000 UTC m=+0.130323500 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:18:31 np0005485008 nova_compute[192512]: 2025-10-13 16:18:31.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:31 np0005485008 nova_compute[192512]: 2025-10-13 16:18:31.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:18:31 np0005485008 nova_compute[192512]: 2025-10-13 16:18:31.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:32 np0005485008 nova_compute[192512]: 2025-10-13 16:18:32.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:32 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:18:32.831 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:18:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:18:33.985 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:18:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:18:33.986 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:18:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:18:33.986 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:18:34 np0005485008 nova_compute[192512]: 2025-10-13 16:18:34.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:35 np0005485008 nova_compute[192512]: 2025-10-13 16:18:35.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:35 np0005485008 podman[202884]: time="2025-10-13T16:18:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:18:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:18:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:18:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:18:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 13 12:18:36 np0005485008 nova_compute[192512]: 2025-10-13 16:18:36.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:36 np0005485008 nova_compute[192512]: 2025-10-13 16:18:36.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:37 np0005485008 nova_compute[192512]: 2025-10-13 16:18:37.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:37 np0005485008 nova_compute[192512]: 2025-10-13 16:18:37.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:18:37 np0005485008 nova_compute[192512]: 2025-10-13 16:18:37.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:18:37 np0005485008 nova_compute[192512]: 2025-10-13 16:18:37.450 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:18:37 np0005485008 nova_compute[192512]: 2025-10-13 16:18:37.451 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:37 np0005485008 nova_compute[192512]: 2025-10-13 16:18:37.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.469 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.469 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.650 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.654 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5855MB free_disk=73.4631233215332GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.654 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.655 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.730 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.730 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.750 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.786 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.786 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.812 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.857 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.888 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.918 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.919 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:18:38 np0005485008 nova_compute[192512]: 2025-10-13 16:18:38.919 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:18:40 np0005485008 nova_compute[192512]: 2025-10-13 16:18:40.920 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:40 np0005485008 nova_compute[192512]: 2025-10-13 16:18:40.921 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:18:41 np0005485008 nova_compute[192512]: 2025-10-13 16:18:41.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:42 np0005485008 nova_compute[192512]: 2025-10-13 16:18:42.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:44 np0005485008 podman[228186]: 2025-10-13 16:18:44.78295251 +0000 UTC m=+0.082005104 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Oct 13 12:18:46 np0005485008 nova_compute[192512]: 2025-10-13 16:18:46.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:47 np0005485008 nova_compute[192512]: 2025-10-13 16:18:47.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:18:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:18:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:18:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:18:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:18:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:18:51 np0005485008 nova_compute[192512]: 2025-10-13 16:18:51.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:52 np0005485008 nova_compute[192512]: 2025-10-13 16:18:52.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:56 np0005485008 nova_compute[192512]: 2025-10-13 16:18:56.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:18:57 np0005485008 nova_compute[192512]: 2025-10-13 16:18:57.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:01 np0005485008 nova_compute[192512]: 2025-10-13 16:19:01.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:01 np0005485008 podman[228210]: 2025-10-13 16:19:01.777989798 +0000 UTC m=+0.063298313 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 12:19:01 np0005485008 podman[228209]: 2025-10-13 16:19:01.794278606 +0000 UTC m=+0.081099207 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 13 12:19:01 np0005485008 podman[228207]: 2025-10-13 16:19:01.810350036 +0000 UTC m=+0.100193552 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS)
Oct 13 12:19:01 np0005485008 podman[228211]: 2025-10-13 16:19:01.83359362 +0000 UTC m=+0.110164403 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 13 12:19:01 np0005485008 podman[228208]: 2025-10-13 16:19:01.834412106 +0000 UTC m=+0.119977719 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 12:19:02 np0005485008 nova_compute[192512]: 2025-10-13 16:19:02.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:05 np0005485008 podman[202884]: time="2025-10-13T16:19:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:19:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:19:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:19:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:19:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 13 12:19:06 np0005485008 nova_compute[192512]: 2025-10-13 16:19:06.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:07 np0005485008 nova_compute[192512]: 2025-10-13 16:19:07.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:10 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:10Z|00297|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 13 12:19:11 np0005485008 nova_compute[192512]: 2025-10-13 16:19:11.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:12 np0005485008 nova_compute[192512]: 2025-10-13 16:19:12.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:15 np0005485008 podman[228307]: 2025-10-13 16:19:15.780104883 +0000 UTC m=+0.085057939 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 12:19:16 np0005485008 nova_compute[192512]: 2025-10-13 16:19:16.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:17 np0005485008 nova_compute[192512]: 2025-10-13 16:19:17.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:18 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 12:19:19 np0005485008 nova_compute[192512]: 2025-10-13 16:19:19.225 2 DEBUG nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Creating tmpfile /var/lib/nova/instances/tmp3dr9gatg to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:19:19 np0005485008 nova_compute[192512]: 2025-10-13 16:19:19.227 2 DEBUG nova.compute.manager [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3dr9gatg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:19:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:19:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:19:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:19:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:19:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:19:20 np0005485008 nova_compute[192512]: 2025-10-13 16:19:20.386 2 DEBUG nova.compute.manager [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3dr9gatg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d2984f9e-f5e9-4b51-8567-7fa736f90221',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:19:20 np0005485008 nova_compute[192512]: 2025-10-13 16:19:20.431 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:19:20 np0005485008 nova_compute[192512]: 2025-10-13 16:19:20.431 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:19:20 np0005485008 nova_compute[192512]: 2025-10-13 16:19:20.431 2 DEBUG nova.network.neutron [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:19:21 np0005485008 nova_compute[192512]: 2025-10-13 16:19:21.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.269 2 DEBUG nova.network.neutron [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Updating instance_info_cache with network_info: [{"id": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "address": "fa:16:3e:10:48:f9", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6c7f79f-bc", "ovs_interfaceid": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.294 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.296 2 DEBUG nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3dr9gatg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d2984f9e-f5e9-4b51-8567-7fa736f90221',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.297 2 DEBUG nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Creating instance directory: /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.298 2 DEBUG nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Creating disk.info with the contents: {'/var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk': 'qcow2', '/var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.300 2 DEBUG nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.301 2 DEBUG nova.objects.instance [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d2984f9e-f5e9-4b51-8567-7fa736f90221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.344 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.421 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.424 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.425 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.436 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.500 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.502 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.537 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.538 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.539 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.594 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.596 2 DEBUG nova.virt.disk.api [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.597 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.663 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.665 2 DEBUG nova.virt.disk.api [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.666 2 DEBUG nova.objects.instance [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid d2984f9e-f5e9-4b51-8567-7fa736f90221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.692 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.725 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk.config 485376" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.728 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk.config to /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.729 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk.config /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:22 np0005485008 nova_compute[192512]: 2025-10-13 16:19:22.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.166 2 DEBUG oslo_concurrency.processutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk.config /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.167 2 DEBUG nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.169 2 DEBUG nova.virt.libvirt.vif [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:18:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-848568148',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-848568148',id=34,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:18:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d49081b07bec47cb94825500c1227cc5',ramdisk_id='',reservation_id='r-r1ygpp79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:18:48Z,user_data=None,user_id='885a697ea55c4785a65f452bcfd48f00',uuid=d2984f9e-f5e9-4b51-8567-7fa736f90221,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "address": "fa:16:3e:10:48:f9", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape6c7f79f-bc", "ovs_interfaceid": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.169 2 DEBUG nova.network.os_vif_util [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "address": "fa:16:3e:10:48:f9", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape6c7f79f-bc", "ovs_interfaceid": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.170 2 DEBUG nova.network.os_vif_util [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:48:f9,bridge_name='br-int',has_traffic_filtering=True,id=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6c7f79f-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.171 2 DEBUG os_vif [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:48:f9,bridge_name='br-int',has_traffic_filtering=True,id=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6c7f79f-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6c7f79f-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6c7f79f-bc, col_values=(('external_ids', {'iface-id': 'e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:48:f9', 'vm-uuid': 'd2984f9e-f5e9-4b51-8567-7fa736f90221'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:23 np0005485008 NetworkManager[51587]: <info>  [1760372363.1802] manager: (tape6c7f79f-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.187 2 INFO os_vif [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:48:f9,bridge_name='br-int',has_traffic_filtering=True,id=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6c7f79f-bc')#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.187 2 DEBUG nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:19:23 np0005485008 nova_compute[192512]: 2025-10-13 16:19:23.187 2 DEBUG nova.compute.manager [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3dr9gatg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d2984f9e-f5e9-4b51-8567-7fa736f90221',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:19:24 np0005485008 nova_compute[192512]: 2025-10-13 16:19:24.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:24.829 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:19:24 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:24.830 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:19:25 np0005485008 nova_compute[192512]: 2025-10-13 16:19:25.271 2 DEBUG nova.network.neutron [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Port e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:19:25 np0005485008 nova_compute[192512]: 2025-10-13 16:19:25.273 2 DEBUG nova.compute.manager [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=74752,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3dr9gatg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='d2984f9e-f5e9-4b51-8567-7fa736f90221',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:19:25 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 12:19:25 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 12:19:25 np0005485008 kernel: tape6c7f79f-bc: entered promiscuous mode
Oct 13 12:19:25 np0005485008 NetworkManager[51587]: <info>  [1760372365.7029] manager: (tape6c7f79f-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct 13 12:19:25 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:25Z|00298|binding|INFO|Claiming lport e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be for this additional chassis.
Oct 13 12:19:25 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:25Z|00299|binding|INFO|e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be: Claiming fa:16:3e:10:48:f9 10.100.0.14
Oct 13 12:19:25 np0005485008 nova_compute[192512]: 2025-10-13 16:19:25.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:25 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:25Z|00300|binding|INFO|Setting lport e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be ovn-installed in OVS
Oct 13 12:19:25 np0005485008 nova_compute[192512]: 2025-10-13 16:19:25.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:25 np0005485008 nova_compute[192512]: 2025-10-13 16:19:25.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:25 np0005485008 nova_compute[192512]: 2025-10-13 16:19:25.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:25 np0005485008 systemd-machined[152551]: New machine qemu-25-instance-00000022.
Oct 13 12:19:25 np0005485008 systemd-udevd[228385]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:19:25 np0005485008 systemd[1]: Started Virtual Machine qemu-25-instance-00000022.
Oct 13 12:19:25 np0005485008 NetworkManager[51587]: <info>  [1760372365.7732] device (tape6c7f79f-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:19:25 np0005485008 NetworkManager[51587]: <info>  [1760372365.7743] device (tape6c7f79f-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:19:27 np0005485008 nova_compute[192512]: 2025-10-13 16:19:27.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:27 np0005485008 nova_compute[192512]: 2025-10-13 16:19:27.922 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372367.9221487, d2984f9e-f5e9-4b51-8567-7fa736f90221 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:19:27 np0005485008 nova_compute[192512]: 2025-10-13 16:19:27.923 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] VM Started (Lifecycle Event)#033[00m
Oct 13 12:19:27 np0005485008 nova_compute[192512]: 2025-10-13 16:19:27.950 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:19:28 np0005485008 nova_compute[192512]: 2025-10-13 16:19:28.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:28 np0005485008 nova_compute[192512]: 2025-10-13 16:19:28.528 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372368.5274873, d2984f9e-f5e9-4b51-8567-7fa736f90221 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:19:28 np0005485008 nova_compute[192512]: 2025-10-13 16:19:28.529 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:19:28 np0005485008 nova_compute[192512]: 2025-10-13 16:19:28.553 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:19:28 np0005485008 nova_compute[192512]: 2025-10-13 16:19:28.557 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:19:28 np0005485008 nova_compute[192512]: 2025-10-13 16:19:28.633 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:19:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:30Z|00301|binding|INFO|Claiming lport e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be for this chassis.
Oct 13 12:19:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:30Z|00302|binding|INFO|e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be: Claiming fa:16:3e:10:48:f9 10.100.0.14
Oct 13 12:19:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:30Z|00303|binding|INFO|Setting lport e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be up in Southbound
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.367 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:48:f9 10.100.0.14'], port_security=['fa:16:3e:10:48:f9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2984f9e-f5e9-4b51-8567-7fa736f90221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7719b000-5e4d-4d3a-b708-5359f703a47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd49081b07bec47cb94825500c1227cc5', 'neutron:revision_number': '11', 'neutron:security_group_ids': '57819618-944c-49e8-878d-f8990222aa02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbea28d2-ff4d-4b7b-9e11-50e566aaedc6, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.368 103642 INFO neutron.agent.ovn.metadata.agent [-] Port e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be in datapath 7719b000-5e4d-4d3a-b708-5359f703a47f bound to our chassis#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.369 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7719b000-5e4d-4d3a-b708-5359f703a47f#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.384 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7f939f0f-4a23-4833-b9e0-450bf2e3bd6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.385 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7719b000-51 in ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.387 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7719b000-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.387 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[48ab61e2-cdd1-47f0-a84d-add2b56c5beb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.388 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[74e55a79-9613-4eb1-95d6-7d270ce3eb6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.403 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[bd229d26-184c-4f67-9038-2fc170f7b021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.431 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5e985e19-1e88-41ba-9b0d-da2d1da3fe60]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.472 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e15c04-41bf-4604-9a5f-0de1cee8e924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.480 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[daaa7a6c-96fc-436c-ae8a-6205ea54c22d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 NetworkManager[51587]: <info>  [1760372370.4829] manager: (tap7719b000-50): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Oct 13 12:19:30 np0005485008 systemd-udevd[228419]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.518 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e39a13-1dc0-49fd-af7c-a3c029896f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.523 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[809718c4-2c2a-47b2-a671-3e794d7a7cb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 NetworkManager[51587]: <info>  [1760372370.5538] device (tap7719b000-50): carrier: link connected
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.560 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[36265f77-afbd-4f4b-9a58-18a8682e959c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.578 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2f07f175-37c1-43f3-982e-380e22f883c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7719b000-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:60:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590821, 'reachable_time': 19061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 228438, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.601 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[29098044-579b-44fb-9170-8e52cfd569ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:6031'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 590821, 'tstamp': 590821}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 228439, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.628 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[71f7c4dc-c9a5-4317-aef1-98856798d58f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7719b000-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:60:31'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590821, 'reachable_time': 19061, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 228440, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.683 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[e22655e6-c125-4da4-a431-9f629515dd76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 nova_compute[192512]: 2025-10-13 16:19:30.693 2 INFO nova.compute.manager [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Post operation of migration started#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.764 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5408046d-4233-4a8e-a94b-cabae5433627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.766 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7719b000-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.766 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.767 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7719b000-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:30 np0005485008 nova_compute[192512]: 2025-10-13 16:19:30.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:30 np0005485008 NetworkManager[51587]: <info>  [1760372370.7709] manager: (tap7719b000-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 13 12:19:30 np0005485008 kernel: tap7719b000-50: entered promiscuous mode
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.774 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7719b000-50, col_values=(('external_ids', {'iface-id': 'a9ea0e42-f04e-4e6a-a438-9f22d491c237'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:30 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:30Z|00304|binding|INFO|Releasing lport a9ea0e42-f04e-4e6a-a438-9f22d491c237 from this chassis (sb_readonly=0)
Oct 13 12:19:30 np0005485008 nova_compute[192512]: 2025-10-13 16:19:30.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:30 np0005485008 nova_compute[192512]: 2025-10-13 16:19:30.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.777 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7719b000-5e4d-4d3a-b708-5359f703a47f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7719b000-5e4d-4d3a-b708-5359f703a47f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.779 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7f533c17-2887-4114-90a9-4b234f84c07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.780 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-7719b000-5e4d-4d3a-b708-5359f703a47f
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/7719b000-5e4d-4d3a-b708-5359f703a47f.pid.haproxy
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 7719b000-5e4d-4d3a-b708-5359f703a47f
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:19:30 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:30.781 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'env', 'PROCESS_TAG=haproxy-7719b000-5e4d-4d3a-b708-5359f703a47f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7719b000-5e4d-4d3a-b708-5359f703a47f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:19:30 np0005485008 nova_compute[192512]: 2025-10-13 16:19:30.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:31 np0005485008 podman[228473]: 2025-10-13 16:19:31.2201798 +0000 UTC m=+0.058045499 container create 694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:19:31 np0005485008 nova_compute[192512]: 2025-10-13 16:19:31.223 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:19:31 np0005485008 nova_compute[192512]: 2025-10-13 16:19:31.224 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:19:31 np0005485008 nova_compute[192512]: 2025-10-13 16:19:31.226 2 DEBUG nova.network.neutron [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:19:31 np0005485008 systemd[1]: Started libpod-conmon-694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5.scope.
Oct 13 12:19:31 np0005485008 podman[228473]: 2025-10-13 16:19:31.188344088 +0000 UTC m=+0.026209797 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:19:31 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:19:31 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4ea7fa2966f0b264b885dfcf30c8290ee53368dfe1724ae0dc3edf00c9a0368/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:19:31 np0005485008 podman[228473]: 2025-10-13 16:19:31.316567463 +0000 UTC m=+0.154433182 container init 694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:19:31 np0005485008 podman[228473]: 2025-10-13 16:19:31.324093637 +0000 UTC m=+0.161959336 container start 694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:19:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [NOTICE]   (228492) : New worker (228494) forked
Oct 13 12:19:31 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [NOTICE]   (228492) : Loading success.
Oct 13 12:19:32 np0005485008 nova_compute[192512]: 2025-10-13 16:19:32.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:32 np0005485008 nova_compute[192512]: 2025-10-13 16:19:32.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:19:32 np0005485008 podman[228505]: 2025-10-13 16:19:32.785208529 +0000 UTC m=+0.063674945 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:19:32 np0005485008 podman[228503]: 2025-10-13 16:19:32.795166909 +0000 UTC m=+0.084797552 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 12:19:32 np0005485008 podman[228506]: 2025-10-13 16:19:32.806914504 +0000 UTC m=+0.083004046 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:19:32 np0005485008 nova_compute[192512]: 2025-10-13 16:19:32.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:32 np0005485008 podman[228504]: 2025-10-13 16:19:32.818871987 +0000 UTC m=+0.109426929 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 13 12:19:32 np0005485008 podman[228507]: 2025-10-13 16:19:32.855511319 +0000 UTC m=+0.128491834 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 12:19:33 np0005485008 nova_compute[192512]: 2025-10-13 16:19:33.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:33.987 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:33.988 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:33.988 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:34.833 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:35 np0005485008 podman[202884]: time="2025-10-13T16:19:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:19:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:19:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:19:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:19:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3475 "" "Go-http-client/1.1"
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.894 2 DEBUG nova.network.neutron [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Updating instance_info_cache with network_info: [{"id": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "address": "fa:16:3e:10:48:f9", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6c7f79f-bc", "ovs_interfaceid": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.922 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.947 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.947 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.947 2 DEBUG oslo_concurrency.lockutils [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:35 np0005485008 nova_compute[192512]: 2025-10-13 16:19:35.954 2 INFO nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:19:35 np0005485008 virtqemud[192082]: Domain id=25 name='instance-00000022' uuid=d2984f9e-f5e9-4b51-8567-7fa736f90221 is tainted: custom-monitor
Oct 13 12:19:36 np0005485008 nova_compute[192512]: 2025-10-13 16:19:36.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:36 np0005485008 nova_compute[192512]: 2025-10-13 16:19:36.967 2 INFO nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:19:37 np0005485008 nova_compute[192512]: 2025-10-13 16:19:37.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:37 np0005485008 nova_compute[192512]: 2025-10-13 16:19:37.975 2 INFO nova.virt.libvirt.driver [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:19:37 np0005485008 nova_compute[192512]: 2025-10-13 16:19:37.983 2 DEBUG nova.compute.manager [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.038 2 DEBUG nova.objects.instance [None req-a586bc1c-983d-4491-8ff0-6060b46e9c7e f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.833 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.833 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.833 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 12:19:38 np0005485008 nova_compute[192512]: 2025-10-13 16:19:38.834 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d2984f9e-f5e9-4b51-8567-7fa736f90221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.554 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Updating instance_info_cache with network_info: [{"id": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "address": "fa:16:3e:10:48:f9", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6c7f79f-bc", "ovs_interfaceid": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.588 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-d2984f9e-f5e9-4b51-8567-7fa736f90221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.589 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.590 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.591 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.591 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.622 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.623 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.624 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.624 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.709 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.806 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.808 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:19:40 np0005485008 nova_compute[192512]: 2025-10-13 16:19:40.884 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.048 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.050 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5640MB free_disk=73.43428039550781GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.050 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.050 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.152 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance d2984f9e-f5e9-4b51-8567-7fa736f90221 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.153 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.153 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.207 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.221 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.260 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.261 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.961 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Acquiring lock "d2984f9e-f5e9-4b51-8567-7fa736f90221" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.961 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.962 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Acquiring lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.962 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.963 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.965 2 INFO nova.compute.manager [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Terminating instance#033[00m
Oct 13 12:19:41 np0005485008 nova_compute[192512]: 2025-10-13 16:19:41.967 2 DEBUG nova.compute.manager [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:19:41 np0005485008 kernel: tape6c7f79f-bc (unregistering): left promiscuous mode
Oct 13 12:19:42 np0005485008 NetworkManager[51587]: <info>  [1760372382.0013] device (tape6c7f79f-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:19:42 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:42Z|00305|binding|INFO|Releasing lport e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be from this chassis (sb_readonly=0)
Oct 13 12:19:42 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:42Z|00306|binding|INFO|Setting lport e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be down in Southbound
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 ovn_controller[94758]: 2025-10-13T16:19:42Z|00307|binding|INFO|Removing iface tape6c7f79f-bc ovn-installed in OVS
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.027 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:48:f9 10.100.0.14'], port_security=['fa:16:3e:10:48:f9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2984f9e-f5e9-4b51-8567-7fa736f90221', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7719b000-5e4d-4d3a-b708-5359f703a47f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd49081b07bec47cb94825500c1227cc5', 'neutron:revision_number': '13', 'neutron:security_group_ids': '57819618-944c-49e8-878d-f8990222aa02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbea28d2-ff4d-4b7b-9e11-50e566aaedc6, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.029 103642 INFO neutron.agent.ovn.metadata.agent [-] Port e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be in datapath 7719b000-5e4d-4d3a-b708-5359f703a47f unbound from our chassis#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.030 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7719b000-5e4d-4d3a-b708-5359f703a47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.032 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[32d7dd72-18af-4520-b547-c60d1c84775f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.033 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f namespace which is not needed anymore#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct 13 12:19:42 np0005485008 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000022.scope: Consumed 3.231s CPU time.
Oct 13 12:19:42 np0005485008 systemd-machined[152551]: Machine qemu-25-instance-00000022 terminated.
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.097 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [NOTICE]   (228492) : haproxy version is 2.8.14-c23fe91
Oct 13 12:19:42 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [NOTICE]   (228492) : path to executable is /usr/sbin/haproxy
Oct 13 12:19:42 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [WARNING]  (228492) : Exiting Master process...
Oct 13 12:19:42 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [WARNING]  (228492) : Exiting Master process...
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [ALERT]    (228492) : Current worker (228494) exited with code 143 (Terminated)
Oct 13 12:19:42 np0005485008 neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f[228488]: [WARNING]  (228492) : All workers exited. Exiting... (0)
Oct 13 12:19:42 np0005485008 systemd[1]: libpod-694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5.scope: Deactivated successfully.
Oct 13 12:19:42 np0005485008 podman[228634]: 2025-10-13 16:19:42.212840836 +0000 UTC m=+0.059417662 container died 694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.247 2 INFO nova.virt.libvirt.driver [-] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Instance destroyed successfully.#033[00m
Oct 13 12:19:42 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5-userdata-shm.mount: Deactivated successfully.
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.248 2 DEBUG nova.objects.instance [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lazy-loading 'resources' on Instance uuid d2984f9e-f5e9-4b51-8567-7fa736f90221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:19:42 np0005485008 systemd[1]: var-lib-containers-storage-overlay-a4ea7fa2966f0b264b885dfcf30c8290ee53368dfe1724ae0dc3edf00c9a0368-merged.mount: Deactivated successfully.
Oct 13 12:19:42 np0005485008 podman[228634]: 2025-10-13 16:19:42.259343744 +0000 UTC m=+0.105920570 container cleanup 694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.269 2 DEBUG nova.virt.libvirt.vif [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:18:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteWorkloadBalanceStrategy-server-848568148',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecuteworkloadbalancestrategy-server-848568148',id=34,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:18:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d49081b07bec47cb94825500c1227cc5',ramdisk_id='',reservation_id='r-r1ygpp79',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467',owner_user_name='tempest-TestExecuteWorkloadBalanceStrategy-1005555467-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:19:38Z,user_data=None,user_id='885a697ea55c4785a65f452bcfd48f00',uuid=d2984f9e-f5e9-4b51-8567-7fa736f90221,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "address": "fa:16:3e:10:48:f9", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6c7f79f-bc", "ovs_interfaceid": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.270 2 DEBUG nova.network.os_vif_util [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Converting VIF {"id": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "address": "fa:16:3e:10:48:f9", "network": {"id": "7719b000-5e4d-4d3a-b708-5359f703a47f", "bridge": "br-int", "label": "tempest-TestExecuteWorkloadBalanceStrategy-1375252188-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f5441da6d849ffb52473a57a863f87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6c7f79f-bc", "ovs_interfaceid": "e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.272 2 DEBUG nova.network.os_vif_util [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:48:f9,bridge_name='br-int',has_traffic_filtering=True,id=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6c7f79f-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.272 2 DEBUG os_vif [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:48:f9,bridge_name='br-int',has_traffic_filtering=True,id=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6c7f79f-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6c7f79f-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:42 np0005485008 systemd[1]: libpod-conmon-694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5.scope: Deactivated successfully.
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.282 2 INFO os_vif [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:48:f9,bridge_name='br-int',has_traffic_filtering=True,id=e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be,network=Network(7719b000-5e4d-4d3a-b708-5359f703a47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6c7f79f-bc')#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.283 2 INFO nova.virt.libvirt.driver [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Deleting instance files /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221_del#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.284 2 INFO nova.virt.libvirt.driver [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Deletion of /var/lib/nova/instances/d2984f9e-f5e9-4b51-8567-7fa736f90221_del complete#033[00m
Oct 13 12:19:42 np0005485008 podman[228683]: 2025-10-13 16:19:42.334900738 +0000 UTC m=+0.046498379 container remove 694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.340 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0644db-f367-45ac-a40c-8f61c700895d]: (4, ('Mon Oct 13 04:19:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f (694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5)\n694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5\nMon Oct 13 04:19:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f (694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5)\n694474f52ea57c7c1805e333cc233112d13b3d0b2e3238afacd0ab04430afff5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.342 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[165d4529-c4fd-45c3-95a7-4c560cd45098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.343 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7719b000-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:19:42 np0005485008 kernel: tap7719b000-50: left promiscuous mode
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.351 2 INFO nova.compute.manager [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.351 2 DEBUG oslo.service.loopingcall [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.351 2 DEBUG nova.compute.manager [-] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.352 2 DEBUG nova.network.neutron [-] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.364 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7b71bad5-6722-44b4-9b9c-8b21668efc61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.393 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[a295cf4d-8f8f-47d4-88e5-da4f9317df18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.395 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9cb0f6-21bb-4d25-9765-42bd0e02aba0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.412 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[550f3968-1819-4c97-9261-0d6ddec2190c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 590813, 'reachable_time': 38310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 228698, 'error': None, 'target': 'ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.416 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7719b000-5e4d-4d3a-b708-5359f703a47f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:19:42 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:19:42.417 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[dec69236-f9bf-4a85-a4be-501da209e3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:19:42 np0005485008 systemd[1]: run-netns-ovnmeta\x2d7719b000\x2d5e4d\x2d4d3a\x2db708\x2d5359f703a47f.mount: Deactivated successfully.
Oct 13 12:19:42 np0005485008 nova_compute[192512]: 2025-10-13 16:19:42.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:43 np0005485008 nova_compute[192512]: 2025-10-13 16:19:43.123 2 DEBUG nova.compute.manager [req-1ef35441-91f0-456e-abd2-1bdd2abf021e req-d9b33b9f-b1f4-49ea-bdff-04d1c1b3c533 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Received event network-vif-unplugged-e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:19:43 np0005485008 nova_compute[192512]: 2025-10-13 16:19:43.125 2 DEBUG oslo_concurrency.lockutils [req-1ef35441-91f0-456e-abd2-1bdd2abf021e req-d9b33b9f-b1f4-49ea-bdff-04d1c1b3c533 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:43 np0005485008 nova_compute[192512]: 2025-10-13 16:19:43.126 2 DEBUG oslo_concurrency.lockutils [req-1ef35441-91f0-456e-abd2-1bdd2abf021e req-d9b33b9f-b1f4-49ea-bdff-04d1c1b3c533 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:43 np0005485008 nova_compute[192512]: 2025-10-13 16:19:43.126 2 DEBUG oslo_concurrency.lockutils [req-1ef35441-91f0-456e-abd2-1bdd2abf021e req-d9b33b9f-b1f4-49ea-bdff-04d1c1b3c533 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:43 np0005485008 nova_compute[192512]: 2025-10-13 16:19:43.126 2 DEBUG nova.compute.manager [req-1ef35441-91f0-456e-abd2-1bdd2abf021e req-d9b33b9f-b1f4-49ea-bdff-04d1c1b3c533 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] No waiting events found dispatching network-vif-unplugged-e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:19:43 np0005485008 nova_compute[192512]: 2025-10-13 16:19:43.126 2 DEBUG nova.compute.manager [req-1ef35441-91f0-456e-abd2-1bdd2abf021e req-d9b33b9f-b1f4-49ea-bdff-04d1c1b3c533 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Received event network-vif-unplugged-e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:19:44 np0005485008 nova_compute[192512]: 2025-10-13 16:19:44.884 2 DEBUG nova.network.neutron [-] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:19:44 np0005485008 nova_compute[192512]: 2025-10-13 16:19:44.903 2 INFO nova.compute.manager [-] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Took 2.55 seconds to deallocate network for instance.#033[00m
Oct 13 12:19:44 np0005485008 nova_compute[192512]: 2025-10-13 16:19:44.984 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:44 np0005485008 nova_compute[192512]: 2025-10-13 16:19:44.985 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.077 2 DEBUG nova.compute.provider_tree [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.107 2 DEBUG nova.scheduler.client.report [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.135 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.188 2 INFO nova.scheduler.client.report [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Deleted allocations for instance d2984f9e-f5e9-4b51-8567-7fa736f90221#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.259 2 DEBUG nova.compute.manager [req-9606a81c-5e95-4c0c-af36-da9995cba855 req-a7190894-369f-4a98-9311-9ba25f0db536 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Received event network-vif-plugged-e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.260 2 DEBUG oslo_concurrency.lockutils [req-9606a81c-5e95-4c0c-af36-da9995cba855 req-a7190894-369f-4a98-9311-9ba25f0db536 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.260 2 DEBUG oslo_concurrency.lockutils [req-9606a81c-5e95-4c0c-af36-da9995cba855 req-a7190894-369f-4a98-9311-9ba25f0db536 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.261 2 DEBUG oslo_concurrency.lockutils [req-9606a81c-5e95-4c0c-af36-da9995cba855 req-a7190894-369f-4a98-9311-9ba25f0db536 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.261 2 DEBUG nova.compute.manager [req-9606a81c-5e95-4c0c-af36-da9995cba855 req-a7190894-369f-4a98-9311-9ba25f0db536 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] No waiting events found dispatching network-vif-plugged-e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.262 2 WARNING nova.compute.manager [req-9606a81c-5e95-4c0c-af36-da9995cba855 req-a7190894-369f-4a98-9311-9ba25f0db536 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Received unexpected event network-vif-plugged-e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be for instance with vm_state deleted and task_state None.#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.263 2 DEBUG nova.compute.manager [req-9606a81c-5e95-4c0c-af36-da9995cba855 req-a7190894-369f-4a98-9311-9ba25f0db536 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Received event network-vif-deleted-e6c7f79f-bc1b-4bb5-8d9a-a216de6a53be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:19:45 np0005485008 nova_compute[192512]: 2025-10-13 16:19:45.298 2 DEBUG oslo_concurrency.lockutils [None req-dd2bf4be-e2c0-46a2-934a-78f69d3ddf05 885a697ea55c4785a65f452bcfd48f00 d49081b07bec47cb94825500c1227cc5 - - default default] Lock "d2984f9e-f5e9-4b51-8567-7fa736f90221" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:19:46 np0005485008 podman[228699]: 2025-10-13 16:19:46.795193959 +0000 UTC m=+0.084177893 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Oct 13 12:19:47 np0005485008 nova_compute[192512]: 2025-10-13 16:19:47.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:47 np0005485008 nova_compute[192512]: 2025-10-13 16:19:47.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:19:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:19:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:19:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:19:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:19:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:19:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:19:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:19:50 np0005485008 nova_compute[192512]: 2025-10-13 16:19:50.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:19:52 np0005485008 nova_compute[192512]: 2025-10-13 16:19:52.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:52 np0005485008 nova_compute[192512]: 2025-10-13 16:19:52.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:57 np0005485008 nova_compute[192512]: 2025-10-13 16:19:57.245 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760372382.2430577, d2984f9e-f5e9-4b51-8567-7fa736f90221 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:19:57 np0005485008 nova_compute[192512]: 2025-10-13 16:19:57.245 2 INFO nova.compute.manager [-] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:19:57 np0005485008 nova_compute[192512]: 2025-10-13 16:19:57.267 2 DEBUG nova.compute.manager [None req-03d4068e-6f1c-4bf3-a5e0-fb62df71172a - - - - - -] [instance: d2984f9e-f5e9-4b51-8567-7fa736f90221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:19:57 np0005485008 nova_compute[192512]: 2025-10-13 16:19:57.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:19:57 np0005485008 nova_compute[192512]: 2025-10-13 16:19:57.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:02 np0005485008 nova_compute[192512]: 2025-10-13 16:20:02.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:02 np0005485008 nova_compute[192512]: 2025-10-13 16:20:02.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:03 np0005485008 podman[228720]: 2025-10-13 16:20:03.79466903 +0000 UTC m=+0.087240398 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 13 12:20:03 np0005485008 podman[228728]: 2025-10-13 16:20:03.795919709 +0000 UTC m=+0.068972269 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:20:03 np0005485008 podman[228721]: 2025-10-13 16:20:03.810772832 +0000 UTC m=+0.089604463 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 12:20:03 np0005485008 podman[228722]: 2025-10-13 16:20:03.857637241 +0000 UTC m=+0.127950276 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 12:20:03 np0005485008 podman[228729]: 2025-10-13 16:20:03.876086525 +0000 UTC m=+0.139388762 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 13 12:20:05 np0005485008 podman[202884]: time="2025-10-13T16:20:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:20:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:20:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:20:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:20:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 13 12:20:07 np0005485008 nova_compute[192512]: 2025-10-13 16:20:07.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:07 np0005485008 nova_compute[192512]: 2025-10-13 16:20:07.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:12 np0005485008 nova_compute[192512]: 2025-10-13 16:20:12.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:12 np0005485008 nova_compute[192512]: 2025-10-13 16:20:12.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:17 np0005485008 nova_compute[192512]: 2025-10-13 16:20:17.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:17 np0005485008 podman[228822]: 2025-10-13 16:20:17.80525786 +0000 UTC m=+0.102131023 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, 
build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 13 12:20:17 np0005485008 nova_compute[192512]: 2025-10-13 16:20:17.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:20:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:20:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:20:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:20:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:20:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:20:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:20:22 np0005485008 nova_compute[192512]: 2025-10-13 16:20:22.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:22 np0005485008 nova_compute[192512]: 2025-10-13 16:20:22.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:25 np0005485008 ovn_controller[94758]: 2025-10-13T16:20:25Z|00308|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 13 12:20:27 np0005485008 nova_compute[192512]: 2025-10-13 16:20:27.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:27 np0005485008 nova_compute[192512]: 2025-10-13 16:20:27.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:32 np0005485008 nova_compute[192512]: 2025-10-13 16:20:32.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:32 np0005485008 nova_compute[192512]: 2025-10-13 16:20:32.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:20:33.990 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:20:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:20:33.991 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:20:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:20:33.991 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:20:34 np0005485008 nova_compute[192512]: 2025-10-13 16:20:34.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:34 np0005485008 nova_compute[192512]: 2025-10-13 16:20:34.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:20:34 np0005485008 podman[228847]: 2025-10-13 16:20:34.774638842 +0000 UTC m=+0.063367986 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 
9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 13 12:20:34 np0005485008 podman[228848]: 2025-10-13 16:20:34.78548974 +0000 UTC m=+0.071329744 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:20:34 np0005485008 podman[228846]: 2025-10-13 16:20:34.793745117 +0000 UTC m=+0.081826960 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:20:34 np0005485008 podman[228845]: 2025-10-13 16:20:34.829061836 +0000 UTC m=+0.125713406 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 13 12:20:34 np0005485008 podman[228854]: 2025-10-13 16:20:34.837732757 +0000 UTC m=+0.119241915 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:20:35 np0005485008 nova_compute[192512]: 2025-10-13 16:20:35.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:35 np0005485008 podman[202884]: time="2025-10-13T16:20:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:20:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:20:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:20:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:20:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3012 "" "Go-http-client/1.1"
Oct 13 12:20:37 np0005485008 nova_compute[192512]: 2025-10-13 16:20:37.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:37 np0005485008 nova_compute[192512]: 2025-10-13 16:20:37.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:37 np0005485008 nova_compute[192512]: 2025-10-13 16:20:37.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:38 np0005485008 nova_compute[192512]: 2025-10-13 16:20:38.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:38 np0005485008 nova_compute[192512]: 2025-10-13 16:20:38.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:39 np0005485008 nova_compute[192512]: 2025-10-13 16:20:39.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:39 np0005485008 nova_compute[192512]: 2025-10-13 16:20:39.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:20:39 np0005485008 nova_compute[192512]: 2025-10-13 16:20:39.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:20:39 np0005485008 nova_compute[192512]: 2025-10-13 16:20:39.462 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.639 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.640 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.640 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.641 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.828 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.830 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5873MB free_disk=73.46311569213867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.830 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.830 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.914 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.915 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.946 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:20:41 np0005485008 nova_compute[192512]: 2025-10-13 16:20:41.978 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:20:42 np0005485008 nova_compute[192512]: 2025-10-13 16:20:42.003 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:20:42 np0005485008 nova_compute[192512]: 2025-10-13 16:20:42.003 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:20:42 np0005485008 nova_compute[192512]: 2025-10-13 16:20:42.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:42 np0005485008 nova_compute[192512]: 2025-10-13 16:20:42.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:43 np0005485008 nova_compute[192512]: 2025-10-13 16:20:43.002 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:47 np0005485008 nova_compute[192512]: 2025-10-13 16:20:47.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:47 np0005485008 nova_compute[192512]: 2025-10-13 16:20:47.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:48 np0005485008 podman[228950]: 2025-10-13 16:20:48.78842803 +0000 UTC m=+0.080794697 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7)
Oct 13 12:20:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:20:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:20:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:20:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:20:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:20:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:20:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:20:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:20:52 np0005485008 nova_compute[192512]: 2025-10-13 16:20:52.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:52 np0005485008 nova_compute[192512]: 2025-10-13 16:20:52.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:54 np0005485008 nova_compute[192512]: 2025-10-13 16:20:54.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:20:57 np0005485008 nova_compute[192512]: 2025-10-13 16:20:57.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:20:57 np0005485008 nova_compute[192512]: 2025-10-13 16:20:57.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:01 np0005485008 nova_compute[192512]: 2025-10-13 16:21:01.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:21:01.380 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:21:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:21:01.383 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:21:02 np0005485008 nova_compute[192512]: 2025-10-13 16:21:02.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:02 np0005485008 nova_compute[192512]: 2025-10-13 16:21:02.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:03 np0005485008 nova_compute[192512]: 2025-10-13 16:21:03.461 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:03 np0005485008 nova_compute[192512]: 2025-10-13 16:21:03.462 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 12:21:03 np0005485008 nova_compute[192512]: 2025-10-13 16:21:03.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:04 np0005485008 nova_compute[192512]: 2025-10-13 16:21:04.445 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:04 np0005485008 nova_compute[192512]: 2025-10-13 16:21:04.445 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 12:21:04 np0005485008 nova_compute[192512]: 2025-10-13 16:21:04.466 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 12:21:05 np0005485008 podman[202884]: time="2025-10-13T16:21:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:21:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:21:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:21:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:21:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3013 "" "Go-http-client/1.1"
Oct 13 12:21:05 np0005485008 podman[228971]: 2025-10-13 16:21:05.777805883 +0000 UTC m=+0.079568959 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:21:05 np0005485008 podman[228972]: 2025-10-13 16:21:05.797235478 +0000 UTC m=+0.088583940 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 13 12:21:05 np0005485008 podman[228973]: 2025-10-13 16:21:05.80597146 +0000 UTC m=+0.096339381 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 13 12:21:05 np0005485008 podman[228980]: 2025-10-13 16:21:05.806775816 +0000 UTC m=+0.091775080 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 13 12:21:05 np0005485008 podman[228974]: 2025-10-13 16:21:05.807416576 +0000 UTC m=+0.098332874 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:21:07 np0005485008 nova_compute[192512]: 2025-10-13 16:21:07.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:07 np0005485008 nova_compute[192512]: 2025-10-13 16:21:07.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:10 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:21:10.385 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:21:12 np0005485008 nova_compute[192512]: 2025-10-13 16:21:12.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:12 np0005485008 nova_compute[192512]: 2025-10-13 16:21:12.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:17 np0005485008 nova_compute[192512]: 2025-10-13 16:21:17.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:17 np0005485008 nova_compute[192512]: 2025-10-13 16:21:17.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:21:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:21:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:21:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:21:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:21:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:21:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:21:19 np0005485008 podman[229077]: 2025-10-13 16:21:19.761055563 +0000 UTC m=+0.068766983 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Oct 13 12:21:22 np0005485008 nova_compute[192512]: 2025-10-13 16:21:22.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:22 np0005485008 nova_compute[192512]: 2025-10-13 16:21:22.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:27 np0005485008 nova_compute[192512]: 2025-10-13 16:21:27.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:27 np0005485008 nova_compute[192512]: 2025-10-13 16:21:27.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:28 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 12:21:32 np0005485008 nova_compute[192512]: 2025-10-13 16:21:32.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:32 np0005485008 nova_compute[192512]: 2025-10-13 16:21:32.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:21:33.991 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:21:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:21:33.992 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:21:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:21:33.992 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:21:35 np0005485008 nova_compute[192512]: 2025-10-13 16:21:35.449 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:35 np0005485008 nova_compute[192512]: 2025-10-13 16:21:35.450 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:21:35 np0005485008 podman[202884]: time="2025-10-13T16:21:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:21:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:21:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:21:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:21:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 13 12:21:36 np0005485008 nova_compute[192512]: 2025-10-13 16:21:36.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:36 np0005485008 podman[229101]: 2025-10-13 16:21:36.796579634 +0000 UTC m=+0.092367919 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 12:21:36 np0005485008 podman[229103]: 2025-10-13 16:21:36.802153508 +0000 UTC m=+0.079759636 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:21:36 np0005485008 podman[229109]: 2025-10-13 16:21:36.81092428 +0000 UTC m=+0.081021445 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:21:36 np0005485008 podman[229112]: 2025-10-13 16:21:36.823077679 +0000 UTC m=+0.101209634 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 12:21:36 np0005485008 podman[229102]: 2025-10-13 16:21:36.830028716 +0000 UTC m=+0.119414741 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 13 12:21:37 np0005485008 nova_compute[192512]: 2025-10-13 16:21:37.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:37 np0005485008 ovn_controller[94758]: 2025-10-13T16:21:37Z|00309|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 13 12:21:37 np0005485008 nova_compute[192512]: 2025-10-13 16:21:37.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:38 np0005485008 nova_compute[192512]: 2025-10-13 16:21:38.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:38 np0005485008 nova_compute[192512]: 2025-10-13 16:21:38.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:39 np0005485008 nova_compute[192512]: 2025-10-13 16:21:39.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:39 np0005485008 nova_compute[192512]: 2025-10-13 16:21:39.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:21:39 np0005485008 nova_compute[192512]: 2025-10-13 16:21:39.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:21:39 np0005485008 nova_compute[192512]: 2025-10-13 16:21:39.452 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:21:40 np0005485008 nova_compute[192512]: 2025-10-13 16:21:40.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.452 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.453 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.453 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.454 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.687 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.688 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5865MB free_disk=73.46311569213867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.688 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.689 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.866 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.867 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.894 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.911 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.914 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:21:41 np0005485008 nova_compute[192512]: 2025-10-13 16:21:41.914 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:21:42 np0005485008 nova_compute[192512]: 2025-10-13 16:21:42.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:42 np0005485008 nova_compute[192512]: 2025-10-13 16:21:42.914 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:42 np0005485008 nova_compute[192512]: 2025-10-13 16:21:42.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:43 np0005485008 nova_compute[192512]: 2025-10-13 16:21:43.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:47 np0005485008 nova_compute[192512]: 2025-10-13 16:21:47.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:47 np0005485008 nova_compute[192512]: 2025-10-13 16:21:47.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:21:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:21:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:21:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:21:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:21:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:21:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:21:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:21:50 np0005485008 podman[229205]: 2025-10-13 16:21:50.783279229 +0000 UTC m=+0.074596214 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Oct 13 12:21:52 np0005485008 nova_compute[192512]: 2025-10-13 16:21:52.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:53 np0005485008 nova_compute[192512]: 2025-10-13 16:21:53.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:54 np0005485008 nova_compute[192512]: 2025-10-13 16:21:54.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:21:57 np0005485008 nova_compute[192512]: 2025-10-13 16:21:57.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:21:58 np0005485008 nova_compute[192512]: 2025-10-13 16:21:58.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:02 np0005485008 nova_compute[192512]: 2025-10-13 16:22:02.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:03 np0005485008 nova_compute[192512]: 2025-10-13 16:22:03.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:05 np0005485008 podman[202884]: time="2025-10-13T16:22:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:22:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:22:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:22:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:22:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 13 12:22:07 np0005485008 nova_compute[192512]: 2025-10-13 16:22:07.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:07 np0005485008 podman[229227]: 2025-10-13 16:22:07.771840765 +0000 UTC m=+0.069498296 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:22:07 np0005485008 podman[229234]: 2025-10-13 16:22:07.802912183 +0000 UTC m=+0.082241764 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:22:07 np0005485008 podman[229229]: 2025-10-13 16:22:07.819579932 +0000 UTC m=+0.100359077 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Oct 13 12:22:07 np0005485008 podman[229235]: 2025-10-13 16:22:07.834506597 +0000 UTC m=+0.113019572 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 13 12:22:07 np0005485008 podman[229228]: 2025-10-13 16:22:07.836166588 +0000 UTC m=+0.115560110 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 12:22:08 np0005485008 nova_compute[192512]: 2025-10-13 16:22:08.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:12 np0005485008 nova_compute[192512]: 2025-10-13 16:22:12.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:13 np0005485008 nova_compute[192512]: 2025-10-13 16:22:13.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:17 np0005485008 nova_compute[192512]: 2025-10-13 16:22:17.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:18 np0005485008 nova_compute[192512]: 2025-10-13 16:22:18.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:22:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:22:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:22:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:22:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:22:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:22:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:22:21 np0005485008 podman[229330]: 2025-10-13 16:22:21.773448294 +0000 UTC m=+0.066112360 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Oct 13 12:22:22 np0005485008 nova_compute[192512]: 2025-10-13 16:22:22.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:23 np0005485008 nova_compute[192512]: 2025-10-13 16:22:23.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:27 np0005485008 nova_compute[192512]: 2025-10-13 16:22:27.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:28 np0005485008 nova_compute[192512]: 2025-10-13 16:22:28.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:32 np0005485008 nova_compute[192512]: 2025-10-13 16:22:32.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:33 np0005485008 nova_compute[192512]: 2025-10-13 16:22:33.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:22:33.992 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:22:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:22:33.993 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:22:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:22:33.993 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:22:35 np0005485008 nova_compute[192512]: 2025-10-13 16:22:35.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:35 np0005485008 nova_compute[192512]: 2025-10-13 16:22:35.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:22:35 np0005485008 podman[202884]: time="2025-10-13T16:22:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:22:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:22:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:22:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:22:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3005 "" "Go-http-client/1.1"
Oct 13 12:22:36 np0005485008 nova_compute[192512]: 2025-10-13 16:22:36.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:37 np0005485008 nova_compute[192512]: 2025-10-13 16:22:37.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:38 np0005485008 nova_compute[192512]: 2025-10-13 16:22:38.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:38 np0005485008 podman[229352]: 2025-10-13 16:22:38.767379781 +0000 UTC m=+0.058670878 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid)
Oct 13 12:22:38 np0005485008 podman[229351]: 2025-10-13 16:22:38.772389407 +0000 UTC m=+0.060367282 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 12:22:38 np0005485008 podman[229353]: 2025-10-13 16:22:38.780080526 +0000 UTC m=+0.062185478 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:22:38 np0005485008 podman[229354]: 2025-10-13 16:22:38.780479549 +0000 UTC m=+0.057324137 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:22:38 np0005485008 podman[229355]: 2025-10-13 16:22:38.811411582 +0000 UTC m=+0.084982158 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 12:22:39 np0005485008 nova_compute[192512]: 2025-10-13 16:22:39.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:39 np0005485008 nova_compute[192512]: 2025-10-13 16:22:39.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:41 np0005485008 nova_compute[192512]: 2025-10-13 16:22:41.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:41 np0005485008 nova_compute[192512]: 2025-10-13 16:22:41.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:22:41 np0005485008 nova_compute[192512]: 2025-10-13 16:22:41.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:22:41 np0005485008 nova_compute[192512]: 2025-10-13 16:22:41.450 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:22:42 np0005485008 nova_compute[192512]: 2025-10-13 16:22:42.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:42 np0005485008 nova_compute[192512]: 2025-10-13 16:22:42.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:42 np0005485008 nova_compute[192512]: 2025-10-13 16:22:42.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.454 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.455 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.455 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.455 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.682 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.684 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5877MB free_disk=73.46311569213867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.684 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.684 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.756 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.757 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.835 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.849 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.852 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:22:43 np0005485008 nova_compute[192512]: 2025-10-13 16:22:43.853 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:22:45 np0005485008 nova_compute[192512]: 2025-10-13 16:22:45.853 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:22:47 np0005485008 nova_compute[192512]: 2025-10-13 16:22:47.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:48 np0005485008 nova_compute[192512]: 2025-10-13 16:22:48.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:22:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:22:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:22:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:22:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:22:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:22:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:22:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:22:52 np0005485008 nova_compute[192512]: 2025-10-13 16:22:52.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:52 np0005485008 podman[229454]: 2025-10-13 16:22:52.753309242 +0000 UTC m=+0.056861033 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64)
Oct 13 12:22:53 np0005485008 nova_compute[192512]: 2025-10-13 16:22:53.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:22:54.253 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:22:54 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:22:54.254 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:22:54 np0005485008 nova_compute[192512]: 2025-10-13 16:22:54.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:57 np0005485008 nova_compute[192512]: 2025-10-13 16:22:57.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:22:58 np0005485008 nova_compute[192512]: 2025-10-13 16:22:58.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:02 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:23:02.257 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:23:02 np0005485008 nova_compute[192512]: 2025-10-13 16:23:02.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:03 np0005485008 nova_compute[192512]: 2025-10-13 16:23:03.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:05 np0005485008 podman[202884]: time="2025-10-13T16:23:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:23:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:23:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:23:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:23:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3010 "" "Go-http-client/1.1"
Oct 13 12:23:07 np0005485008 nova_compute[192512]: 2025-10-13 16:23:07.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:08 np0005485008 nova_compute[192512]: 2025-10-13 16:23:08.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:09 np0005485008 podman[229476]: 2025-10-13 16:23:09.756442343 +0000 UTC m=+0.059414910 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:23:09 np0005485008 podman[229477]: 2025-10-13 16:23:09.762341777 +0000 UTC m=+0.061823886 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 13 12:23:09 np0005485008 podman[229479]: 2025-10-13 16:23:09.763491514 +0000 UTC m=+0.055468629 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 12:23:09 np0005485008 podman[229478]: 2025-10-13 16:23:09.771907066 +0000 UTC m=+0.059929668 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 13 12:23:09 np0005485008 podman[229484]: 2025-10-13 16:23:09.841260376 +0000 UTC m=+0.130327430 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 13 12:23:12 np0005485008 nova_compute[192512]: 2025-10-13 16:23:12.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:13 np0005485008 nova_compute[192512]: 2025-10-13 16:23:13.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:17 np0005485008 nova_compute[192512]: 2025-10-13 16:23:17.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:18 np0005485008 nova_compute[192512]: 2025-10-13 16:23:18.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:23:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:23:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:23:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:23:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:23:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:23:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:23:22 np0005485008 nova_compute[192512]: 2025-10-13 16:23:22.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:23 np0005485008 nova_compute[192512]: 2025-10-13 16:23:23.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:23 np0005485008 podman[229583]: 2025-10-13 16:23:23.778678426 +0000 UTC m=+0.077267318 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350)
Oct 13 12:23:27 np0005485008 nova_compute[192512]: 2025-10-13 16:23:27.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:28 np0005485008 nova_compute[192512]: 2025-10-13 16:23:28.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:32 np0005485008 nova_compute[192512]: 2025-10-13 16:23:32.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:33 np0005485008 nova_compute[192512]: 2025-10-13 16:23:33.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:23:33.993 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:23:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:23:33.994 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:23:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:23:33.994 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:23:35 np0005485008 podman[202884]: time="2025-10-13T16:23:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:23:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:23:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:23:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:23:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3008 "" "Go-http-client/1.1"
Oct 13 12:23:36 np0005485008 nova_compute[192512]: 2025-10-13 16:23:36.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:36 np0005485008 nova_compute[192512]: 2025-10-13 16:23:36.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:23:37 np0005485008 nova_compute[192512]: 2025-10-13 16:23:37.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:37 np0005485008 nova_compute[192512]: 2025-10-13 16:23:37.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:38 np0005485008 nova_compute[192512]: 2025-10-13 16:23:38.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:39 np0005485008 nova_compute[192512]: 2025-10-13 16:23:39.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:40 np0005485008 nova_compute[192512]: 2025-10-13 16:23:40.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:40 np0005485008 podman[229606]: 2025-10-13 16:23:40.772853291 +0000 UTC m=+0.072079136 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct 13 12:23:40 np0005485008 podman[229611]: 2025-10-13 16:23:40.781069258 +0000 UTC m=+0.066347249 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:23:40 np0005485008 podman[229605]: 2025-10-13 16:23:40.787560189 +0000 UTC m=+0.088987292 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 12:23:40 np0005485008 podman[229607]: 2025-10-13 16:23:40.817800451 +0000 UTC m=+0.108451039 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:23:40 np0005485008 podman[229619]: 2025-10-13 16:23:40.829512786 +0000 UTC m=+0.114567150 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 13 12:23:42 np0005485008 nova_compute[192512]: 2025-10-13 16:23:42.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:42 np0005485008 nova_compute[192512]: 2025-10-13 16:23:42.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:23:42 np0005485008 nova_compute[192512]: 2025-10-13 16:23:42.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:23:42 np0005485008 nova_compute[192512]: 2025-10-13 16:23:42.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:43 np0005485008 nova_compute[192512]: 2025-10-13 16:23:43.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:43 np0005485008 nova_compute[192512]: 2025-10-13 16:23:43.307 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:23:44 np0005485008 nova_compute[192512]: 2025-10-13 16:23:44.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:44 np0005485008 nova_compute[192512]: 2025-10-13 16:23:44.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.469 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.469 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.619 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.620 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5879MB free_disk=73.46311569213867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.718 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.719 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.736 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.754 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.755 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.775 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.803 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.821 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.836 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.838 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:23:45 np0005485008 nova_compute[192512]: 2025-10-13 16:23:45.838 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:23:46 np0005485008 nova_compute[192512]: 2025-10-13 16:23:46.838 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:23:47 np0005485008 nova_compute[192512]: 2025-10-13 16:23:47.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:48 np0005485008 nova_compute[192512]: 2025-10-13 16:23:48.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:23:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:23:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:23:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:23:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:23:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:23:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:23:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:23:52 np0005485008 nova_compute[192512]: 2025-10-13 16:23:52.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:53 np0005485008 nova_compute[192512]: 2025-10-13 16:23:53.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:54 np0005485008 podman[229707]: 2025-10-13 16:23:54.758927429 +0000 UTC m=+0.062513589 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter)
Oct 13 12:23:57 np0005485008 nova_compute[192512]: 2025-10-13 16:23:57.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:58 np0005485008 nova_compute[192512]: 2025-10-13 16:23:58.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:23:59 np0005485008 nova_compute[192512]: 2025-10-13 16:23:59.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:01 np0005485008 nova_compute[192512]: 2025-10-13 16:24:01.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:24:01.693 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:24:01 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:24:01.695 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:24:02 np0005485008 nova_compute[192512]: 2025-10-13 16:24:02.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:03 np0005485008 nova_compute[192512]: 2025-10-13 16:24:03.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:24:03.697 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:24:05 np0005485008 podman[202884]: time="2025-10-13T16:24:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:24:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:24:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:24:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:24:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3006 "" "Go-http-client/1.1"
Oct 13 12:24:07 np0005485008 nova_compute[192512]: 2025-10-13 16:24:07.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:08 np0005485008 nova_compute[192512]: 2025-10-13 16:24:08.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:11 np0005485008 podman[229732]: 2025-10-13 16:24:11.778614906 +0000 UTC m=+0.074882293 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 13 12:24:11 np0005485008 podman[229733]: 2025-10-13 16:24:11.794409359 +0000 UTC m=+0.083278266 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 12:24:11 np0005485008 podman[229734]: 2025-10-13 16:24:11.798918489 +0000 UTC m=+0.087631171 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 13 12:24:11 np0005485008 podman[229735]: 2025-10-13 16:24:11.801330534 +0000 UTC m=+0.075616807 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:24:11 np0005485008 podman[229741]: 2025-10-13 16:24:11.821817242 +0000 UTC m=+0.100997527 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:24:12 np0005485008 nova_compute[192512]: 2025-10-13 16:24:12.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:13 np0005485008 nova_compute[192512]: 2025-10-13 16:24:13.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:17 np0005485008 nova_compute[192512]: 2025-10-13 16:24:17.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:18 np0005485008 nova_compute[192512]: 2025-10-13 16:24:18.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:24:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:24:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:24:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:24:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:24:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:24:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:24:22 np0005485008 nova_compute[192512]: 2025-10-13 16:24:22.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:23 np0005485008 nova_compute[192512]: 2025-10-13 16:24:23.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:25 np0005485008 podman[229833]: 2025-10-13 16:24:25.763575576 +0000 UTC m=+0.060012621 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible)
Oct 13 12:24:27 np0005485008 nova_compute[192512]: 2025-10-13 16:24:27.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:28 np0005485008 nova_compute[192512]: 2025-10-13 16:24:28.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:32 np0005485008 nova_compute[192512]: 2025-10-13 16:24:32.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:33 np0005485008 nova_compute[192512]: 2025-10-13 16:24:33.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:24:33.994 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:24:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:24:33.995 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:24:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:24:33.995 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:24:35 np0005485008 podman[202884]: time="2025-10-13T16:24:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:24:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:24:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:24:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:24:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3009 "" "Go-http-client/1.1"
Oct 13 12:24:37 np0005485008 nova_compute[192512]: 2025-10-13 16:24:37.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:38 np0005485008 nova_compute[192512]: 2025-10-13 16:24:38.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:38 np0005485008 nova_compute[192512]: 2025-10-13 16:24:38.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:38 np0005485008 nova_compute[192512]: 2025-10-13 16:24:38.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:24:39 np0005485008 nova_compute[192512]: 2025-10-13 16:24:39.424 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:41 np0005485008 nova_compute[192512]: 2025-10-13 16:24:41.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:42 np0005485008 nova_compute[192512]: 2025-10-13 16:24:42.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:42 np0005485008 nova_compute[192512]: 2025-10-13 16:24:42.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:24:42 np0005485008 nova_compute[192512]: 2025-10-13 16:24:42.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:24:42 np0005485008 nova_compute[192512]: 2025-10-13 16:24:42.445 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:24:42 np0005485008 nova_compute[192512]: 2025-10-13 16:24:42.445 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:42 np0005485008 podman[229855]: 2025-10-13 16:24:42.751384541 +0000 UTC m=+0.053115056 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid)
Oct 13 12:24:42 np0005485008 podman[229857]: 2025-10-13 16:24:42.761444654 +0000 UTC m=+0.056229413 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:24:42 np0005485008 podman[229856]: 2025-10-13 16:24:42.777438862 +0000 UTC m=+0.077167085 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 13 12:24:42 np0005485008 podman[229854]: 2025-10-13 16:24:42.77771166 +0000 UTC m=+0.085271427 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 13 12:24:42 np0005485008 podman[229864]: 2025-10-13 16:24:42.799615453 +0000 UTC m=+0.091183592 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 12:24:42 np0005485008 nova_compute[192512]: 2025-10-13 16:24:42.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:43 np0005485008 nova_compute[192512]: 2025-10-13 16:24:43.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:45 np0005485008 nova_compute[192512]: 2025-10-13 16:24:45.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:46 np0005485008 nova_compute[192512]: 2025-10-13 16:24:46.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:46 np0005485008 nova_compute[192512]: 2025-10-13 16:24:46.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.464 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.465 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.465 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.465 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.627 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.628 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5869MB free_disk=73.4631118774414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.629 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.629 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.694 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.694 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.716 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.735 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.736 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.736 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:24:47 np0005485008 nova_compute[192512]: 2025-10-13 16:24:47.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:48 np0005485008 nova_compute[192512]: 2025-10-13 16:24:48.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:24:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:24:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:24:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:24:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:24:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:24:52 np0005485008 nova_compute[192512]: 2025-10-13 16:24:52.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:53 np0005485008 nova_compute[192512]: 2025-10-13 16:24:53.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:56 np0005485008 podman[229953]: 2025-10-13 16:24:56.765568963 +0000 UTC m=+0.073458080 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64)
Oct 13 12:24:57 np0005485008 nova_compute[192512]: 2025-10-13 16:24:57.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:24:58 np0005485008 nova_compute[192512]: 2025-10-13 16:24:58.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.434 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.435 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.452 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.527 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.528 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.537 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.538 2 INFO nova.compute.claims [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.647 2 DEBUG nova.compute.provider_tree [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.664 2 DEBUG nova.scheduler.client.report [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.685 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.686 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.726 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.726 2 DEBUG nova.network.neutron [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.754 2 INFO nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.772 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.879 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.881 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.881 2 INFO nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Creating image(s)#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.882 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "/var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.883 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "/var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.884 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "/var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.902 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.970 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.971 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.972 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:02 np0005485008 nova_compute[192512]: 2025-10-13 16:25:02.990 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.048 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.050 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.082 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk 1073741824" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.083 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.084 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.142 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.143 2 DEBUG nova.virt.disk.api [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Checking if we can resize image /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.143 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.201 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.202 2 DEBUG nova.virt.disk.api [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Cannot resize image /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.203 2 DEBUG nova.objects.instance [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lazy-loading 'migration_context' on Instance uuid d43a5291-85ee-427c-b4b1-aa493ae09f02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.233 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.233 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Ensure instance console log exists: /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.234 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.234 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.235 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:03.495 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:25:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:03.496 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:25:03 np0005485008 nova_compute[192512]: 2025-10-13 16:25:03.734 2 DEBUG nova.network.neutron [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Successfully created port: 1ad5c353-b67b-40e3-bd20-b089f31d32e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct 13 12:25:05 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:05.499 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:25:05 np0005485008 podman[202884]: time="2025-10-13T16:25:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:25:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:25:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:25:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:25:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.438 2 DEBUG nova.network.neutron [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Successfully updated port: 1ad5c353-b67b-40e3-bd20-b089f31d32e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.451 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.452 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquired lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.452 2 DEBUG nova.network.neutron [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.518 2 DEBUG nova.compute.manager [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received event network-changed-1ad5c353-b67b-40e3-bd20-b089f31d32e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.518 2 DEBUG nova.compute.manager [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Refreshing instance network info cache due to event network-changed-1ad5c353-b67b-40e3-bd20-b089f31d32e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.519 2 DEBUG oslo_concurrency.lockutils [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.601 2 DEBUG nova.network.neutron [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct 13 12:25:07 np0005485008 nova_compute[192512]: 2025-10-13 16:25:07.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.145 2 DEBUG nova.network.neutron [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updating instance_info_cache with network_info: [{"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.164 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Releasing lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.165 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Instance network_info: |[{"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.165 2 DEBUG oslo_concurrency.lockutils [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.165 2 DEBUG nova.network.neutron [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Refreshing network info cache for port 1ad5c353-b67b-40e3-bd20-b089f31d32e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.168 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Start _get_guest_xml network_info=[{"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'boot_index': 0, 'disk_bus': 'virtio', 'device_type': 'disk', 'image_id': 'dcd9fbd3-16ab-46e1-976e-0576b433c9d5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.174 2 WARNING nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.179 2 DEBUG nova.virt.libvirt.host [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.180 2 DEBUG nova.virt.libvirt.host [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.186 2 DEBUG nova.virt.libvirt.host [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.187 2 DEBUG nova.virt.libvirt.host [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.188 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.188 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-13T15:39:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ddff5c88-cac9-460e-8ffb-1e9b9a7c2a59',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-13T15:39:46Z,direct_url=<?>,disk_format='qcow2',id=dcd9fbd3-16ab-46e1-976e-0576b433c9d5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='d93a2ce330a244f186b39e1ea3fc96a4',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-13T15:39:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.189 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.189 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.190 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.190 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.190 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.191 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.191 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.192 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.192 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.192 2 DEBUG nova.virt.hardware [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.198 2 DEBUG nova.virt.libvirt.vif [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:25:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-375445520',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-375445520',id=38,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8070a861003e4015ac392983e3444a1c',ramdisk_id='',reservation_id='r-mumyo7w1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-555259936',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-
555259936-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:25:02Z,user_data=None,user_id='fa4b59a04fe44b478d878bb3964dfc67',uuid=d43a5291-85ee-427c-b4b1-aa493ae09f02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.199 2 DEBUG nova.network.os_vif_util [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converting VIF {"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.199 2 DEBUG nova.network.os_vif_util [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8b:ca,bridge_name='br-int',has_traffic_filtering=True,id=1ad5c353-b67b-40e3-bd20-b089f31d32e9,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ad5c353-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.200 2 DEBUG nova.objects.instance [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lazy-loading 'pci_devices' on Instance uuid d43a5291-85ee-427c-b4b1-aa493ae09f02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.215 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] End _get_guest_xml xml=<domain type="kvm">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <uuid>d43a5291-85ee-427c-b4b1-aa493ae09f02</uuid>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <name>instance-00000026</name>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <memory>131072</memory>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <vcpu>1</vcpu>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <metadata>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <nova:name>tempest-TestExecuteZoneMigrationStrategy-server-375445520</nova:name>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <nova:creationTime>2025-10-13 16:25:08</nova:creationTime>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <nova:flavor name="m1.nano">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:memory>128</nova:memory>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:disk>1</nova:disk>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:swap>0</nova:swap>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:ephemeral>0</nova:ephemeral>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:vcpus>1</nova:vcpus>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      </nova:flavor>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <nova:owner>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:user uuid="fa4b59a04fe44b478d878bb3964dfc67">tempest-TestExecuteZoneMigrationStrategy-555259936-project-admin</nova:user>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:project uuid="8070a861003e4015ac392983e3444a1c">tempest-TestExecuteZoneMigrationStrategy-555259936</nova:project>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      </nova:owner>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <nova:root type="image" uuid="dcd9fbd3-16ab-46e1-976e-0576b433c9d5"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <nova:ports>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        <nova:port uuid="1ad5c353-b67b-40e3-bd20-b089f31d32e9">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:        </nova:port>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      </nova:ports>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </nova:instance>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  </metadata>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <sysinfo type="smbios">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <system>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <entry name="manufacturer">RDO</entry>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <entry name="product">OpenStack Compute</entry>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <entry name="serial">d43a5291-85ee-427c-b4b1-aa493ae09f02</entry>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <entry name="uuid">d43a5291-85ee-427c-b4b1-aa493ae09f02</entry>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <entry name="family">Virtual Machine</entry>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </system>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  </sysinfo>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <os>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <type arch="x86_64" machine="q35">hvm</type>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <boot dev="hd"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <smbios mode="sysinfo"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  </os>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <features>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <acpi/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <apic/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <vmcoreinfo/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  </features>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <clock offset="utc">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <timer name="pit" tickpolicy="delay"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <timer name="rtc" tickpolicy="catchup"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <timer name="hpet" present="no"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  </clock>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <cpu mode="host-model" match="exact">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <topology sockets="1" cores="1" threads="1"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  </cpu>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  <devices>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <disk type="file" device="disk">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <driver name="qemu" type="qcow2" cache="none"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <target dev="vda" bus="virtio"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <disk type="file" device="cdrom">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <driver name="qemu" type="raw" cache="none"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <source file="/var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk.config"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <target dev="sda" bus="sata"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </disk>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <interface type="ethernet">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <mac address="fa:16:3e:0f:8b:ca"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <driver name="vhost" rx_queue_size="512"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <mtu size="1442"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <target dev="tap1ad5c353-b6"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </interface>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <serial type="pty">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <log file="/var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/console.log" append="off"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </serial>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <video>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <model type="virtio"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </video>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <input type="tablet" bus="usb"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <rng model="virtio">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <backend model="random">/dev/urandom</backend>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </rng>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="pci" model="pcie-root-port"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <controller type="usb" index="0"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    <memballoon model="virtio">
Oct 13 12:25:08 np0005485008 nova_compute[192512]:      <stats period="10"/>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:    </memballoon>
Oct 13 12:25:08 np0005485008 nova_compute[192512]:  </devices>
Oct 13 12:25:08 np0005485008 nova_compute[192512]: </domain>
Oct 13 12:25:08 np0005485008 nova_compute[192512]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.217 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Preparing to wait for external event network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.217 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.218 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.218 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.219 2 DEBUG nova.virt.libvirt.vif [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-13T16:25:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-375445520',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-375445520',id=38,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8070a861003e4015ac392983e3444a1c',ramdisk_id='',reservation_id='r-mumyo7w1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-555259936',owner_user_name='tempest-TestExecuteZoneMigratio
nStrategy-555259936-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:25:02Z,user_data=None,user_id='fa4b59a04fe44b478d878bb3964dfc67',uuid=d43a5291-85ee-427c-b4b1-aa493ae09f02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.219 2 DEBUG nova.network.os_vif_util [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converting VIF {"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.220 2 DEBUG nova.network.os_vif_util [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8b:ca,bridge_name='br-int',has_traffic_filtering=True,id=1ad5c353-b67b-40e3-bd20-b089f31d32e9,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ad5c353-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.220 2 DEBUG os_vif [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8b:ca,bridge_name='br-int',has_traffic_filtering=True,id=1ad5c353-b67b-40e3-bd20-b089f31d32e9,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ad5c353-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.222 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ad5c353-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ad5c353-b6, col_values=(('external_ids', {'iface-id': '1ad5c353-b67b-40e3-bd20-b089f31d32e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:8b:ca', 'vm-uuid': 'd43a5291-85ee-427c-b4b1-aa493ae09f02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:08 np0005485008 NetworkManager[51587]: <info>  [1760372708.2326] manager: (tap1ad5c353-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.239 2 INFO os_vif [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:8b:ca,bridge_name='br-int',has_traffic_filtering=True,id=1ad5c353-b67b-40e3-bd20-b089f31d32e9,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ad5c353-b6')#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.295 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.296 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.296 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] No VIF found with MAC fa:16:3e:0f:8b:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.297 2 INFO nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Using config drive#033[00m
Oct 13 12:25:08 np0005485008 nova_compute[192512]: 2025-10-13 16:25:08.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.256 2 INFO nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Creating config drive at /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk.config#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.261 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_u4e40s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.388 2 DEBUG oslo_concurrency.processutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_u4e40s" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:09 np0005485008 kernel: tap1ad5c353-b6: entered promiscuous mode
Oct 13 12:25:09 np0005485008 NetworkManager[51587]: <info>  [1760372709.4526] manager: (tap1ad5c353-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Oct 13 12:25:09 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:09Z|00310|binding|INFO|Claiming lport 1ad5c353-b67b-40e3-bd20-b089f31d32e9 for this chassis.
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:09Z|00311|binding|INFO|1ad5c353-b67b-40e3-bd20-b089f31d32e9: Claiming fa:16:3e:0f:8b:ca 10.100.0.13
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.482 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:8b:ca 10.100.0.13'], port_security=['fa:16:3e:0f:8b:ca 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd43a5291-85ee-427c-b4b1-aa493ae09f02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8070a861003e4015ac392983e3444a1c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b4a7b065-b6b3-4ca9-a2f2-9ce57a948736', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d76a7fa5-2ace-4961-8c3f-1bc066b9377f, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=1ad5c353-b67b-40e3-bd20-b089f31d32e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.483 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 1ad5c353-b67b-40e3-bd20-b089f31d32e9 in datapath 6eec20f4-a93b-4c67-a33f-a03051c51d88 bound to our chassis#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.484 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eec20f4-a93b-4c67-a33f-a03051c51d88#033[00m
Oct 13 12:25:09 np0005485008 systemd-udevd[230009]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:25:09 np0005485008 systemd-machined[152551]: New machine qemu-26-instance-00000026.
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.498 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[98033c24-6207-4414-86b9-84a0d452038f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.499 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6eec20f4-a1 in ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.501 214965 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6eec20f4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.502 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3528f7-de95-4c69-9ef9-644f97ef7994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.503 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3316d5f2-e1cc-43f2-acdf-a960cfc8de07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 NetworkManager[51587]: <info>  [1760372709.5059] device (tap1ad5c353-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:25:09 np0005485008 NetworkManager[51587]: <info>  [1760372709.5069] device (tap1ad5c353-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:25:09 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:09Z|00312|binding|INFO|Setting lport 1ad5c353-b67b-40e3-bd20-b089f31d32e9 ovn-installed in OVS
Oct 13 12:25:09 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:09Z|00313|binding|INFO|Setting lport 1ad5c353-b67b-40e3-bd20-b089f31d32e9 up in Southbound
Oct 13 12:25:09 np0005485008 systemd[1]: Started Virtual Machine qemu-26-instance-00000026.
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.519 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd65b9c-e5c7-446b-9624-73dc3c4a8dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.538 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[24d0e623-cbb6-4569-9caf-a169748bf5c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.568 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ae611b77-438f-4bd3-8e8d-4fc0ff606989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 NetworkManager[51587]: <info>  [1760372709.5755] manager: (tap6eec20f4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/112)
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.574 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0120d9fd-b215-439c-b13b-6ca1ba9adc65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 systemd-udevd[230013]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.612 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[897e365e-5ba9-40a5-8c38-bcb83a4f61fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.616 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[558eceeb-1c86-44f1-b356-34db21bdf873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 NetworkManager[51587]: <info>  [1760372709.6453] device (tap6eec20f4-a0): carrier: link connected
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.650 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[82b2fee0-130b-43fa-917f-49e48b821385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.669 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[0220bd37-ad8f-4143-9c96-0f4886634e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eec20f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624731, 'reachable_time': 34815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230042, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.684 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[91466247-fe41-4524-a193-5f41a00d6f39]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:dc92'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624731, 'tstamp': 624731}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230043, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.697 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0494d9-ad11-4f8c-80b0-e9bd89d23cdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eec20f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624731, 'reachable_time': 34815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 230044, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.726 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[f439dafc-a853-4d51-b67b-7ea3a66d18cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.781 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[63ddf8ab-a684-4df5-9714-b62ca5c7f313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.783 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eec20f4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.784 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.784 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eec20f4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:25:09 np0005485008 kernel: tap6eec20f4-a0: entered promiscuous mode
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 NetworkManager[51587]: <info>  [1760372709.8041] manager: (tap6eec20f4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.804 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eec20f4-a0, col_values=(('external_ids', {'iface-id': '673b3766-45cd-4d9e-aa60-c456a25db44e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:09 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:09Z|00314|binding|INFO|Releasing lport 673b3766-45cd-4d9e-aa60-c456a25db44e from this chassis (sb_readonly=0)
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.807 103642 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6eec20f4-a93b-4c67-a33f-a03051c51d88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6eec20f4-a93b-4c67-a33f-a03051c51d88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.808 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c477a00d-bfd9-42f3-8550-5800406b5731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.809 103642 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: global
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    log         /dev/log local0 debug
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    log-tag     haproxy-metadata-proxy-6eec20f4-a93b-4c67-a33f-a03051c51d88
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    user        root
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    group       root
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    maxconn     1024
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    pidfile     /var/lib/neutron/external/pids/6eec20f4-a93b-4c67-a33f-a03051c51d88.pid.haproxy
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    daemon
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: defaults
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    log global
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    mode http
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    option httplog
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    option dontlognull
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    option http-server-close
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    option forwardfor
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    retries                 3
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    timeout http-request    30s
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    timeout connect         30s
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    timeout client          32s
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    timeout server          32s
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    timeout http-keep-alive 30s
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: listen listener
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    bind 169.254.169.254:80
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    server metadata /var/lib/neutron/metadata_proxy
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]:    http-request add-header X-OVN-Network-ID 6eec20f4-a93b-4c67-a33f-a03051c51d88
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct 13 12:25:09 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:09.811 103642 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'env', 'PROCESS_TAG=haproxy-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6eec20f4-a93b-4c67-a33f-a03051c51d88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct 13 12:25:09 np0005485008 nova_compute[192512]: 2025-10-13 16:25:09.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.054 2 DEBUG nova.compute.manager [req-ec3dd402-39f8-4318-a964-ae74d7dd73ff req-5f342aca-be99-4986-9d2c-c63c3efb5c38 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received event network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.059 2 DEBUG oslo_concurrency.lockutils [req-ec3dd402-39f8-4318-a964-ae74d7dd73ff req-5f342aca-be99-4986-9d2c-c63c3efb5c38 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.059 2 DEBUG oslo_concurrency.lockutils [req-ec3dd402-39f8-4318-a964-ae74d7dd73ff req-5f342aca-be99-4986-9d2c-c63c3efb5c38 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.060 2 DEBUG oslo_concurrency.lockutils [req-ec3dd402-39f8-4318-a964-ae74d7dd73ff req-5f342aca-be99-4986-9d2c-c63c3efb5c38 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.060 2 DEBUG nova.compute.manager [req-ec3dd402-39f8-4318-a964-ae74d7dd73ff req-5f342aca-be99-4986-9d2c-c63c3efb5c38 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Processing event network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.210 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372710.2096515, d43a5291-85ee-427c-b4b1-aa493ae09f02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.212 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] VM Started (Lifecycle Event)#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.217 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.222 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.227 2 INFO nova.virt.libvirt.driver [-] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Instance spawned successfully.#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.228 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.235 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:25:10 np0005485008 podman[230083]: 2025-10-13 16:25:10.238132203 +0000 UTC m=+0.065898566 container create 5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.240 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.251 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.252 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.253 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.253 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.253 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.254 2 DEBUG nova.virt.libvirt.driver [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.259 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.259 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372710.2097976, d43a5291-85ee-427c-b4b1-aa493ae09f02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.259 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] VM Paused (Lifecycle Event)
Oct 13 12:25:10 np0005485008 systemd[1]: Started libpod-conmon-5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02.scope.
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.288 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 12:25:10 np0005485008 podman[230083]: 2025-10-13 16:25:10.198256116 +0000 UTC m=+0.026022509 image pull f5d8e43d5fd113645e71c7e8980543c233298a7561a4c95ab3af27cb119e70b7 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.292 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372710.221998, d43a5291-85ee-427c-b4b1-aa493ae09f02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.292 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] VM Resumed (Lifecycle Event)
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.315 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 12:25:10 np0005485008 systemd[1]: Started libcrun container.
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.319 2 INFO nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Took 7.44 seconds to spawn the instance on the hypervisor.
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.320 2 DEBUG nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.321 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 13 12:25:10 np0005485008 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f7e072c56cf2d4406d59b041ae1b48eacf18894fadafba4003f01fa10ec119/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.339 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 13 12:25:10 np0005485008 podman[230083]: 2025-10-13 16:25:10.342633843 +0000 UTC m=+0.170400226 container init 5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 13 12:25:10 np0005485008 podman[230083]: 2025-10-13 16:25:10.349074743 +0000 UTC m=+0.176841116 container start 5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:25:10 np0005485008 neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88[230098]: [NOTICE]   (230103) : New worker (230105) forked
Oct 13 12:25:10 np0005485008 neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88[230098]: [NOTICE]   (230103) : Loading success.
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.403 2 INFO nova.compute.manager [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Took 7.90 seconds to build instance.
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.446 2 DEBUG nova.network.neutron [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updated VIF entry in instance network info cache for port 1ad5c353-b67b-40e3-bd20-b089f31d32e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.446 2 DEBUG nova.network.neutron [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updating instance_info_cache with network_info: [{"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.449 2 DEBUG oslo_concurrency.lockutils [None req-c80b0736-884b-4454-9253-7c988eb2fe23 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 12:25:10 np0005485008 nova_compute[192512]: 2025-10-13 16:25:10.462 2 DEBUG oslo_concurrency.lockutils [req-8168b625-4c3f-4f02-af0a-9bc3cadac532 req-a4cd79be-fa09-4cc1-8ada-d71c630350b3 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 13 12:25:12 np0005485008 nova_compute[192512]: 2025-10-13 16:25:12.156 2 DEBUG nova.compute.manager [req-e153dcad-e37b-44e9-a2d9-c86af4d4370e req-bfaed8a1-29c5-44f9-be74-6441e681c0c0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received event network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 13 12:25:12 np0005485008 nova_compute[192512]: 2025-10-13 16:25:12.157 2 DEBUG oslo_concurrency.lockutils [req-e153dcad-e37b-44e9-a2d9-c86af4d4370e req-bfaed8a1-29c5-44f9-be74-6441e681c0c0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 12:25:12 np0005485008 nova_compute[192512]: 2025-10-13 16:25:12.157 2 DEBUG oslo_concurrency.lockutils [req-e153dcad-e37b-44e9-a2d9-c86af4d4370e req-bfaed8a1-29c5-44f9-be74-6441e681c0c0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 12:25:12 np0005485008 nova_compute[192512]: 2025-10-13 16:25:12.157 2 DEBUG oslo_concurrency.lockutils [req-e153dcad-e37b-44e9-a2d9-c86af4d4370e req-bfaed8a1-29c5-44f9-be74-6441e681c0c0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 12:25:12 np0005485008 nova_compute[192512]: 2025-10-13 16:25:12.157 2 DEBUG nova.compute.manager [req-e153dcad-e37b-44e9-a2d9-c86af4d4370e req-bfaed8a1-29c5-44f9-be74-6441e681c0c0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] No waiting events found dispatching network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 13 12:25:12 np0005485008 nova_compute[192512]: 2025-10-13 16:25:12.157 2 WARNING nova.compute.manager [req-e153dcad-e37b-44e9-a2d9-c86af4d4370e req-bfaed8a1-29c5-44f9-be74-6441e681c0c0 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received unexpected event network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 for instance with vm_state active and task_state None.
Oct 13 12:25:13 np0005485008 nova_compute[192512]: 2025-10-13 16:25:13.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:13 np0005485008 nova_compute[192512]: 2025-10-13 16:25:13.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:13 np0005485008 podman[230115]: 2025-10-13 16:25:13.787142248 +0000 UTC m=+0.074787750 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:25:13 np0005485008 podman[230117]: 2025-10-13 16:25:13.787966354 +0000 UTC m=+0.069630981 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:25:13 np0005485008 podman[230116]: 2025-10-13 16:25:13.810582705 +0000 UTC m=+0.091008814 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:25:13 np0005485008 podman[230114]: 2025-10-13 16:25:13.820638826 +0000 UTC m=+0.113440148 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 12:25:13 np0005485008 podman[230122]: 2025-10-13 16:25:13.833549626 +0000 UTC m=+0.108209556 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:25:18 np0005485008 nova_compute[192512]: 2025-10-13 16:25:18.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:18 np0005485008 nova_compute[192512]: 2025-10-13 16:25:18.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:25:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:25:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:25:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:25:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:25:21 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:21Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:8b:ca 10.100.0.13
Oct 13 12:25:21 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:21Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:8b:ca 10.100.0.13
Oct 13 12:25:23 np0005485008 nova_compute[192512]: 2025-10-13 16:25:23.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:23 np0005485008 nova_compute[192512]: 2025-10-13 16:25:23.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:27 np0005485008 podman[230235]: 2025-10-13 16:25:27.794518672 +0000 UTC m=+0.086978119 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Oct 13 12:25:28 np0005485008 nova_compute[192512]: 2025-10-13 16:25:28.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:28 np0005485008 nova_compute[192512]: 2025-10-13 16:25:28.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:33 np0005485008 nova_compute[192512]: 2025-10-13 16:25:33.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:33 np0005485008 nova_compute[192512]: 2025-10-13 16:25:33.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 13 12:25:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:33.995 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 13 12:25:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:33.997 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 13 12:25:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:25:33.998 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 13 12:25:35 np0005485008 podman[202884]: time="2025-10-13T16:25:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:25:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:25:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:25:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:25:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3479 "" "Go-http-client/1.1"
Oct 13 12:25:38 np0005485008 nova_compute[192512]: 2025-10-13 16:25:38.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:38 np0005485008 nova_compute[192512]: 2025-10-13 16:25:38.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:38 np0005485008 nova_compute[192512]: 2025-10-13 16:25:38.737 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:38 np0005485008 nova_compute[192512]: 2025-10-13 16:25:38.738 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:25:39 np0005485008 ovn_controller[94758]: 2025-10-13T16:25:39Z|00315|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct 13 12:25:40 np0005485008 nova_compute[192512]: 2025-10-13 16:25:40.425 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:42 np0005485008 nova_compute[192512]: 2025-10-13 16:25:42.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:42 np0005485008 nova_compute[192512]: 2025-10-13 16:25:42.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:25:42 np0005485008 nova_compute[192512]: 2025-10-13 16:25:42.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:25:43 np0005485008 nova_compute[192512]: 2025-10-13 16:25:43.127 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:25:43 np0005485008 nova_compute[192512]: 2025-10-13 16:25:43.128 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:25:43 np0005485008 nova_compute[192512]: 2025-10-13 16:25:43.128 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 12:25:43 np0005485008 nova_compute[192512]: 2025-10-13 16:25:43.128 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d43a5291-85ee-427c-b4b1-aa493ae09f02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:25:43 np0005485008 nova_compute[192512]: 2025-10-13 16:25:43.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:43 np0005485008 nova_compute[192512]: 2025-10-13 16:25:43.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:44 np0005485008 podman[230257]: 2025-10-13 16:25:44.789144646 +0000 UTC m=+0.086222135 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:25:44 np0005485008 podman[230258]: 2025-10-13 16:25:44.799062334 +0000 UTC m=+0.092479899 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, tcib_managed=true, container_name=iscsid)
Oct 13 12:25:44 np0005485008 podman[230260]: 2025-10-13 16:25:44.832841642 +0000 UTC m=+0.122344026 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Oct 13 12:25:44 np0005485008 podman[230259]: 2025-10-13 16:25:44.832846262 +0000 UTC m=+0.122177961 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:25:44 np0005485008 podman[230261]: 2025-10-13 16:25:44.85569482 +0000 UTC m=+0.137914518 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 12:25:45 np0005485008 nova_compute[192512]: 2025-10-13 16:25:45.180 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updating instance_info_cache with network_info: [{"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:25:45 np0005485008 nova_compute[192512]: 2025-10-13 16:25:45.204 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:25:45 np0005485008 nova_compute[192512]: 2025-10-13 16:25:45.205 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 12:25:45 np0005485008 nova_compute[192512]: 2025-10-13 16:25:45.205 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:45 np0005485008 nova_compute[192512]: 2025-10-13 16:25:45.205 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:46 np0005485008 nova_compute[192512]: 2025-10-13 16:25:46.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:46 np0005485008 nova_compute[192512]: 2025-10-13 16:25:46.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:47 np0005485008 nova_compute[192512]: 2025-10-13 16:25:47.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.466 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.468 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.583 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.666 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.668 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.733 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.895 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.896 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5676MB free_disk=73.43383407592773GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.896 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.897 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.990 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance d43a5291-85ee-427c-b4b1-aa493ae09f02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.990 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:25:48 np0005485008 nova_compute[192512]: 2025-10-13 16:25:48.990 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:25:49 np0005485008 nova_compute[192512]: 2025-10-13 16:25:49.036 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:25:49 np0005485008 nova_compute[192512]: 2025-10-13 16:25:49.092 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:25:49 np0005485008 nova_compute[192512]: 2025-10-13 16:25:49.145 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:25:49 np0005485008 nova_compute[192512]: 2025-10-13 16:25:49.145 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:25:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:25:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:25:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:25:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:25:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:25:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:25:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:25:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:25:53 np0005485008 nova_compute[192512]: 2025-10-13 16:25:53.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:53 np0005485008 nova_compute[192512]: 2025-10-13 16:25:53.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:58 np0005485008 nova_compute[192512]: 2025-10-13 16:25:58.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:58 np0005485008 nova_compute[192512]: 2025-10-13 16:25:58.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:25:58 np0005485008 podman[230364]: 2025-10-13 16:25:58.768691183 +0000 UTC m=+0.068473185 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 12:26:00 np0005485008 nova_compute[192512]: 2025-10-13 16:26:00.143 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:03 np0005485008 nova_compute[192512]: 2025-10-13 16:26:03.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:03 np0005485008 nova_compute[192512]: 2025-10-13 16:26:03.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:05 np0005485008 nova_compute[192512]: 2025-10-13 16:26:05.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:05 np0005485008 podman[202884]: time="2025-10-13T16:26:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:26:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:26:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:26:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:26:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3478 "" "Go-http-client/1.1"
Oct 13 12:26:08 np0005485008 nova_compute[192512]: 2025-10-13 16:26:08.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:08 np0005485008 nova_compute[192512]: 2025-10-13 16:26:08.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:13 np0005485008 nova_compute[192512]: 2025-10-13 16:26:13.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:13 np0005485008 nova_compute[192512]: 2025-10-13 16:26:13.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:15 np0005485008 podman[230386]: 2025-10-13 16:26:15.766275439 +0000 UTC m=+0.061247210 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:26:15 np0005485008 podman[230385]: 2025-10-13 16:26:15.792818353 +0000 UTC m=+0.090173928 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:26:15 np0005485008 podman[230388]: 2025-10-13 16:26:15.800053087 +0000 UTC m=+0.084327627 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:26:15 np0005485008 podman[230387]: 2025-10-13 16:26:15.801896404 +0000 UTC m=+0.092395407 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:26:15 np0005485008 podman[230394]: 2025-10-13 16:26:15.848452858 +0000 UTC m=+0.130452497 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Oct 13 12:26:16 np0005485008 nova_compute[192512]: 2025-10-13 16:26:16.549 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:16 np0005485008 nova_compute[192512]: 2025-10-13 16:26:16.550 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 12:26:17 np0005485008 nova_compute[192512]: 2025-10-13 16:26:17.493 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:17 np0005485008 nova_compute[192512]: 2025-10-13 16:26:17.494 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 12:26:18 np0005485008 nova_compute[192512]: 2025-10-13 16:26:18.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:18 np0005485008 nova_compute[192512]: 2025-10-13 16:26:18.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:18 np0005485008 nova_compute[192512]: 2025-10-13 16:26:18.959 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 12:26:19 np0005485008 nova_compute[192512]: 2025-10-13 16:26:19.090 2 DEBUG nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Creating tmpfile /var/lib/nova/instances/tmpi5gal1f_ to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct 13 12:26:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:26:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:26:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:26:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:26:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:26:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:26:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:26:19 np0005485008 nova_compute[192512]: 2025-10-13 16:26:19.584 2 DEBUG nova.compute.manager [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpi5gal1f_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct 13 12:26:20 np0005485008 nova_compute[192512]: 2025-10-13 16:26:20.799 2 DEBUG nova.compute.manager [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpi5gal1f_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0503a33e-dafe-4641-9e1d-f91a0a697468',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct 13 12:26:20 np0005485008 nova_compute[192512]: 2025-10-13 16:26:20.855 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-0503a33e-dafe-4641-9e1d-f91a0a697468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:26:20 np0005485008 nova_compute[192512]: 2025-10-13 16:26:20.856 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-0503a33e-dafe-4641-9e1d-f91a0a697468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:26:20 np0005485008 nova_compute[192512]: 2025-10-13 16:26:20.856 2 DEBUG nova.network.neutron [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:26:23 np0005485008 nova_compute[192512]: 2025-10-13 16:26:23.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:23 np0005485008 nova_compute[192512]: 2025-10-13 16:26:23.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.220 2 DEBUG nova.network.neutron [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Updating instance_info_cache with network_info: [{"id": "666710ee-8031-4f27-8279-e526e6229929", "address": "fa:16:3e:5d:46:f9", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap666710ee-80", "ovs_interfaceid": "666710ee-8031-4f27-8279-e526e6229929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.258 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-0503a33e-dafe-4641-9e1d-f91a0a697468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.260 2 DEBUG nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpi5gal1f_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0503a33e-dafe-4641-9e1d-f91a0a697468',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.260 2 DEBUG nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Creating instance directory: /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.261 2 DEBUG nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Creating disk.info with the contents: {'/var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk': 'qcow2', '/var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk.config': 'raw'} pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10854#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.261 2 DEBUG nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Checking to make sure images and backing files are present before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10864#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.262 2 DEBUG nova.objects.instance [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0503a33e-dafe-4641-9e1d-f91a0a697468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.291 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.348 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.349 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.350 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.364 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.424 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.425 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.454 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7,backing_fmt=raw /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk 1073741824" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.455 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "e78aeb1b4b324f4d29394e1a6dc918f94babaeb7" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.456 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.516 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e78aeb1b4b324f4d29394e1a6dc918f94babaeb7 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.517 2 DEBUG nova.virt.disk.api [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Checking if we can resize image /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.518 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.629 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.631 2 DEBUG nova.virt.disk.api [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Cannot resize image /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.632 2 DEBUG nova.objects.instance [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 0503a33e-dafe-4641-9e1d-f91a0a697468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.657 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk.config 485376 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.689 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk.config 485376" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.692 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Copying file compute-0.ctlplane.example.com:/var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk.config to /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468 copy_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:103#033[00m
Oct 13 12:26:25 np0005485008 nova_compute[192512]: 2025-10-13 16:26:25.692 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Running cmd (subprocess): scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk.config /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.126 2 DEBUG oslo_concurrency.processutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] CMD "scp -C -r compute-0.ctlplane.example.com:/var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk.config /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.127 2 DEBUG nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.128 2 DEBUG nova.virt.libvirt.vif [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:24:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-2035283321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-2035283321',id=37,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:24:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8070a861003e4015ac392983e3444a1c',ramdisk_id='',reservation_id='r-wxh0xb45',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-555259936',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-555259936-project-admin'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=None,updated_at=2025-10-13T16:24:52Z,user_data=None,user_id='fa4b59a04fe44b478d878bb3964dfc67',uuid=0503a33e-dafe-4641-9e1d-f91a0a697468,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "666710ee-8031-4f27-8279-e526e6229929", "address": "fa:16:3e:5d:46:f9", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap666710ee-80", "ovs_interfaceid": "666710ee-8031-4f27-8279-e526e6229929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.128 2 DEBUG nova.network.os_vif_util [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converting VIF {"id": "666710ee-8031-4f27-8279-e526e6229929", "address": "fa:16:3e:5d:46:f9", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap666710ee-80", "ovs_interfaceid": "666710ee-8031-4f27-8279-e526e6229929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.129 2 DEBUG nova.network.os_vif_util [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:46:f9,bridge_name='br-int',has_traffic_filtering=True,id=666710ee-8031-4f27-8279-e526e6229929,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap666710ee-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.130 2 DEBUG os_vif [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:46:f9,bridge_name='br-int',has_traffic_filtering=True,id=666710ee-8031-4f27-8279-e526e6229929,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap666710ee-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.130 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.135 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap666710ee-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap666710ee-80, col_values=(('external_ids', {'iface-id': '666710ee-8031-4f27-8279-e526e6229929', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:46:f9', 'vm-uuid': '0503a33e-dafe-4641-9e1d-f91a0a697468'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:26 np0005485008 NetworkManager[51587]: <info>  [1760372786.1390] manager: (tap666710ee-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.146 2 INFO os_vif [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:46:f9,bridge_name='br-int',has_traffic_filtering=True,id=666710ee-8031-4f27-8279-e526e6229929,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap666710ee-80')#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.147 2 DEBUG nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct 13 12:26:26 np0005485008 nova_compute[192512]: 2025-10-13 16:26:26.147 2 DEBUG nova.compute.manager [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpi5gal1f_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0503a33e-dafe-4641-9e1d-f91a0a697468',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct 13 12:26:27 np0005485008 nova_compute[192512]: 2025-10-13 16:26:27.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:27.589 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:26:27 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:27.590 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:26:27 np0005485008 nova_compute[192512]: 2025-10-13 16:26:27.945 2 DEBUG nova.network.neutron [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Port 666710ee-8031-4f27-8279-e526e6229929 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct 13 12:26:27 np0005485008 nova_compute[192512]: 2025-10-13 16:26:27.947 2 DEBUG nova.compute.manager [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=True,disk_available_mb=73728,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpi5gal1f_',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='qcow2',instance_relative_path='0503a33e-dafe-4641-9e1d-f91a0a697468',is_shared_block_storage=False,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct 13 12:26:28 np0005485008 systemd[1]: Starting libvirt proxy daemon...
Oct 13 12:26:28 np0005485008 systemd[1]: Started libvirt proxy daemon.
Oct 13 12:26:28 np0005485008 NetworkManager[51587]: <info>  [1760372788.3435] manager: (tap666710ee-80): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct 13 12:26:28 np0005485008 kernel: tap666710ee-80: entered promiscuous mode
Oct 13 12:26:28 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:28Z|00316|binding|INFO|Claiming lport 666710ee-8031-4f27-8279-e526e6229929 for this additional chassis.
Oct 13 12:26:28 np0005485008 nova_compute[192512]: 2025-10-13 16:26:28.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:28 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:28Z|00317|binding|INFO|666710ee-8031-4f27-8279-e526e6229929: Claiming fa:16:3e:5d:46:f9 10.100.0.3
Oct 13 12:26:28 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:28Z|00318|binding|INFO|Setting lport 666710ee-8031-4f27-8279-e526e6229929 ovn-installed in OVS
Oct 13 12:26:28 np0005485008 nova_compute[192512]: 2025-10-13 16:26:28.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:28 np0005485008 nova_compute[192512]: 2025-10-13 16:26:28.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:28 np0005485008 systemd-udevd[230539]: Network interface NamePolicy= disabled on kernel command line.
Oct 13 12:26:28 np0005485008 systemd-machined[152551]: New machine qemu-27-instance-00000025.
Oct 13 12:26:28 np0005485008 NetworkManager[51587]: <info>  [1760372788.4043] device (tap666710ee-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 13 12:26:28 np0005485008 NetworkManager[51587]: <info>  [1760372788.4053] device (tap666710ee-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 13 12:26:28 np0005485008 systemd[1]: Started Virtual Machine qemu-27-instance-00000025.
Oct 13 12:26:28 np0005485008 nova_compute[192512]: 2025-10-13 16:26:28.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:29 np0005485008 nova_compute[192512]: 2025-10-13 16:26:29.397 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372789.3966436, 0503a33e-dafe-4641-9e1d-f91a0a697468 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:26:29 np0005485008 nova_compute[192512]: 2025-10-13 16:26:29.399 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] VM Started (Lifecycle Event)#033[00m
Oct 13 12:26:29 np0005485008 nova_compute[192512]: 2025-10-13 16:26:29.432 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:26:29 np0005485008 podman[230572]: 2025-10-13 16:26:29.792939159 +0000 UTC m=+0.092799779 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Oct 13 12:26:30 np0005485008 nova_compute[192512]: 2025-10-13 16:26:30.338 2 DEBUG nova.virt.driver [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] Emitting event <LifecycleEvent: 1760372790.3377044, 0503a33e-dafe-4641-9e1d-f91a0a697468 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:26:30 np0005485008 nova_compute[192512]: 2025-10-13 16:26:30.338 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] VM Resumed (Lifecycle Event)#033[00m
Oct 13 12:26:31 np0005485008 nova_compute[192512]: 2025-10-13 16:26:31.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:32 np0005485008 nova_compute[192512]: 2025-10-13 16:26:32.828 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:26:32 np0005485008 nova_compute[192512]: 2025-10-13 16:26:32.834 2 DEBUG nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct 13 12:26:32 np0005485008 nova_compute[192512]: 2025-10-13 16:26:32.856 2 INFO nova.compute.manager [None req-180d775d-e24a-4079-a90f-d632e35f2e5f - - - - - -] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct 13 12:26:33 np0005485008 nova_compute[192512]: 2025-10-13 16:26:33.212 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:33 np0005485008 nova_compute[192512]: 2025-10-13 16:26:33.284 2 WARNING nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 2 instances on the hypervisor.#033[00m
Oct 13 12:26:33 np0005485008 nova_compute[192512]: 2025-10-13 16:26:33.284 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Triggering sync for uuid d43a5291-85ee-427c-b4b1-aa493ae09f02 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct 13 12:26:33 np0005485008 nova_compute[192512]: 2025-10-13 16:26:33.285 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:33 np0005485008 nova_compute[192512]: 2025-10-13 16:26:33.286 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:33 np0005485008 nova_compute[192512]: 2025-10-13 16:26:33.315 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:33 np0005485008 nova_compute[192512]: 2025-10-13 16:26:33.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:33.996 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:33.997 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:33.998 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00319|binding|INFO|Claiming lport 666710ee-8031-4f27-8279-e526e6229929 for this chassis.
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00320|binding|INFO|666710ee-8031-4f27-8279-e526e6229929: Claiming fa:16:3e:5d:46:f9 10.100.0.3
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00321|binding|INFO|Setting lport 666710ee-8031-4f27-8279-e526e6229929 up in Southbound
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.272 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:46:f9 10.100.0.3'], port_security=['fa:16:3e:5d:46:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0503a33e-dafe-4641-9e1d-f91a0a697468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8070a861003e4015ac392983e3444a1c', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'b4a7b065-b6b3-4ca9-a2f2-9ce57a948736', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d76a7fa5-2ace-4961-8c3f-1bc066b9377f, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=666710ee-8031-4f27-8279-e526e6229929) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.274 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 666710ee-8031-4f27-8279-e526e6229929 in datapath 6eec20f4-a93b-4c67-a33f-a03051c51d88 bound to our chassis#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.276 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eec20f4-a93b-4c67-a33f-a03051c51d88#033[00m
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00322|binding|INFO|Claiming lport 666710ee-8031-4f27-8279-e526e6229929 for this additional chassis.
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00323|binding|INFO|666710ee-8031-4f27-8279-e526e6229929: Claiming fa:16:3e:5d:46:f9 10.100.0.3
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00324|binding|INFO|Removing lport 666710ee-8031-4f27-8279-e526e6229929 ovn-installed in OVS
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00325|binding|INFO|Setting lport 666710ee-8031-4f27-8279-e526e6229929 down in Southbound
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.297 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[094af86a-877e-485a-be51-d09d3fa35a8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.306 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:46:f9 10.100.0.3'], port_security=['fa:16:3e:5d:46:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp'}, parent_port=[], requested_additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0503a33e-dafe-4641-9e1d-f91a0a697468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8070a861003e4015ac392983e3444a1c', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'b4a7b065-b6b3-4ca9-a2f2-9ce57a948736', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d76a7fa5-2ace-4961-8c3f-1bc066b9377f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=666710ee-8031-4f27-8279-e526e6229929) old=Port_Binding(up=[True], additional_chassis=[], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:26:34 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:34Z|00326|binding|INFO|Setting lport 666710ee-8031-4f27-8279-e526e6229929 ovn-installed in OVS
Oct 13 12:26:34 np0005485008 nova_compute[192512]: 2025-10-13 16:26:34.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:34 np0005485008 nova_compute[192512]: 2025-10-13 16:26:34.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.339 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f9c840-534d-4391-8840-ea00d1e4cf6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.344 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd59990-50f9-41ed-bef3-13600a08a140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.373 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c5fc01-8af8-4044-bf2e-4bbc08afa86c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.401 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c74dfc0d-8393-48fe-b481-0c94cbff2828]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eec20f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 5, 'rx_bytes': 1126, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624731, 'reachable_time': 23713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230599, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.423 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cea9ff-7d75-4f6c-b925-08f760bcf156]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624741, 'tstamp': 624741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230600, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624744, 'tstamp': 624744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230600, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.425 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eec20f4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:34 np0005485008 nova_compute[192512]: 2025-10-13 16:26:34.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:34 np0005485008 nova_compute[192512]: 2025-10-13 16:26:34.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.429 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eec20f4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.430 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.430 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eec20f4-a0, col_values=(('external_ids', {'iface-id': '673b3766-45cd-4d9e-aa60-c456a25db44e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.431 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.432 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 666710ee-8031-4f27-8279-e526e6229929 in datapath 6eec20f4-a93b-4c67-a33f-a03051c51d88 unbound from our chassis#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.433 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eec20f4-a93b-4c67-a33f-a03051c51d88#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.449 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[b5bca76d-7def-416d-a0a1-2b8583c0e05f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.491 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[6455f392-99e2-408b-9228-c02c228d1ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.495 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4d53c7-7eed-42f9-ab4f-a07c85e14797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 nova_compute[192512]: 2025-10-13 16:26:34.521 2 INFO nova.compute.manager [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Post operation of migration started#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.533 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[712ccb65-f2cd-4d19-8832-39163aec6353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.554 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[91cce2f6-c940-4b6b-8987-6a9b9b95d488]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eec20f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 7, 'rx_bytes': 1126, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 7, 'rx_bytes': 1126, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624731, 'reachable_time': 23713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230606, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.571 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6a3236-f5b7-42e2-90a8-8c0a4076d11f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624741, 'tstamp': 624741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230607, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624744, 'tstamp': 624744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230607, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.573 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eec20f4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:34 np0005485008 nova_compute[192512]: 2025-10-13 16:26:34.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.576 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eec20f4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.577 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.577 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eec20f4-a0, col_values=(('external_ids', {'iface-id': '673b3766-45cd-4d9e-aa60-c456a25db44e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:34.578 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:35 np0005485008 nova_compute[192512]: 2025-10-13 16:26:35.404 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "refresh_cache-0503a33e-dafe-4641-9e1d-f91a0a697468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:26:35 np0005485008 nova_compute[192512]: 2025-10-13 16:26:35.405 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquired lock "refresh_cache-0503a33e-dafe-4641-9e1d-f91a0a697468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:26:35 np0005485008 nova_compute[192512]: 2025-10-13 16:26:35.406 2 DEBUG nova.network.neutron [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.593 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:35 np0005485008 podman[202884]: time="2025-10-13T16:26:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:26:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:26:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20780 "" "Go-http-client/1.1"
Oct 13 12:26:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:26:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3479 "" "Go-http-client/1.1"
Oct 13 12:26:35 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:35Z|00327|binding|INFO|Claiming lport 666710ee-8031-4f27-8279-e526e6229929 for this chassis.
Oct 13 12:26:35 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:35Z|00328|binding|INFO|666710ee-8031-4f27-8279-e526e6229929: Claiming fa:16:3e:5d:46:f9 10.100.0.3
Oct 13 12:26:35 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:35Z|00329|binding|INFO|Setting lport 666710ee-8031-4f27-8279-e526e6229929 up in Southbound
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.810 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:46:f9 10.100.0.3'], port_security=['fa:16:3e:5d:46:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0503a33e-dafe-4641-9e1d-f91a0a697468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8070a861003e4015ac392983e3444a1c', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'b4a7b065-b6b3-4ca9-a2f2-9ce57a948736', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d76a7fa5-2ace-4961-8c3f-1bc066b9377f, chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=666710ee-8031-4f27-8279-e526e6229929) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.811 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 666710ee-8031-4f27-8279-e526e6229929 in datapath 6eec20f4-a93b-4c67-a33f-a03051c51d88 bound to our chassis#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.813 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eec20f4-a93b-4c67-a33f-a03051c51d88#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.836 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1f36c027-c99d-45ea-8dc1-3a3d8dc1b256]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.872 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8a19ac-4d16-43e9-852e-dfc0db7a478d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.876 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[27de6a25-2392-44d0-a834-685c0b84edd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.910 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[0285771d-1f9d-4c67-9f44-6c40543b5de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.931 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2a60fed8-ebaf-4315-9a33-f8168d355301]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eec20f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 9, 'rx_bytes': 1126, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 9, 'rx_bytes': 1126, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624731, 'reachable_time': 23713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230613, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.950 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbb3932-2e24-4291-9a69-b8ebf8e2cf09]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624741, 'tstamp': 624741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230614, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624744, 'tstamp': 624744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230614, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.952 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eec20f4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:35 np0005485008 nova_compute[192512]: 2025-10-13 16:26:35.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.956 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eec20f4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.957 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.957 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eec20f4-a0, col_values=(('external_ids', {'iface-id': '673b3766-45cd-4d9e-aa60-c456a25db44e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:35 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:35.957 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:36 np0005485008 nova_compute[192512]: 2025-10-13 16:26:36.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:38 np0005485008 nova_compute[192512]: 2025-10-13 16:26:38.232 2 DEBUG nova.network.neutron [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Updating instance_info_cache with network_info: [{"id": "666710ee-8031-4f27-8279-e526e6229929", "address": "fa:16:3e:5d:46:f9", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap666710ee-80", "ovs_interfaceid": "666710ee-8031-4f27-8279-e526e6229929", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:26:38 np0005485008 nova_compute[192512]: 2025-10-13 16:26:38.256 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Releasing lock "refresh_cache-0503a33e-dafe-4641-9e1d-f91a0a697468" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:26:38 np0005485008 nova_compute[192512]: 2025-10-13 16:26:38.275 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:38 np0005485008 nova_compute[192512]: 2025-10-13 16:26:38.276 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:38 np0005485008 nova_compute[192512]: 2025-10-13 16:26:38.276 2 DEBUG oslo_concurrency.lockutils [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:38 np0005485008 nova_compute[192512]: 2025-10-13 16:26:38.280 2 INFO nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct 13 12:26:38 np0005485008 virtqemud[192082]: Domain id=27 name='instance-00000025' uuid=0503a33e-dafe-4641-9e1d-f91a0a697468 is tainted: custom-monitor
Oct 13 12:26:38 np0005485008 nova_compute[192512]: 2025-10-13 16:26:38.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:39 np0005485008 nova_compute[192512]: 2025-10-13 16:26:39.289 2 INFO nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct 13 12:26:39 np0005485008 nova_compute[192512]: 2025-10-13 16:26:39.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:39 np0005485008 nova_compute[192512]: 2025-10-13 16:26:39.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:26:40 np0005485008 nova_compute[192512]: 2025-10-13 16:26:40.295 2 INFO nova.virt.libvirt.driver [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct 13 12:26:40 np0005485008 nova_compute[192512]: 2025-10-13 16:26:40.300 2 DEBUG nova.compute.manager [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:26:41 np0005485008 nova_compute[192512]: 2025-10-13 16:26:41.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:42 np0005485008 nova_compute[192512]: 2025-10-13 16:26:42.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:42 np0005485008 nova_compute[192512]: 2025-10-13 16:26:42.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:42 np0005485008 nova_compute[192512]: 2025-10-13 16:26:42.426 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:26:42 np0005485008 nova_compute[192512]: 2025-10-13 16:26:42.426 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:26:43 np0005485008 nova_compute[192512]: 2025-10-13 16:26:43.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:43 np0005485008 nova_compute[192512]: 2025-10-13 16:26:43.839 2 DEBUG nova.objects.instance [None req-d662aa8f-87b5-4f55-8186-678b15eef69b f7a2ccecec584527951bbd232947ee9e 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct 13 12:26:45 np0005485008 nova_compute[192512]: 2025-10-13 16:26:45.229 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct 13 12:26:45 np0005485008 nova_compute[192512]: 2025-10-13 16:26:45.230 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquired lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct 13 12:26:45 np0005485008 nova_compute[192512]: 2025-10-13 16:26:45.230 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct 13 12:26:45 np0005485008 nova_compute[192512]: 2025-10-13 16:26:45.230 2 DEBUG nova.objects.instance [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d43a5291-85ee-427c-b4b1-aa493ae09f02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:26:46 np0005485008 nova_compute[192512]: 2025-10-13 16:26:46.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:46 np0005485008 podman[230618]: 2025-10-13 16:26:46.755514349 +0000 UTC m=+0.053863201 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 13 12:26:46 np0005485008 podman[230619]: 2025-10-13 16:26:46.777424109 +0000 UTC m=+0.069447995 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:26:46 np0005485008 podman[230616]: 2025-10-13 16:26:46.801699571 +0000 UTC m=+0.102712716 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:26:46 np0005485008 podman[230627]: 2025-10-13 16:26:46.806423498 +0000 UTC m=+0.093346096 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:26:46 np0005485008 podman[230617]: 2025-10-13 16:26:46.807688657 +0000 UTC m=+0.108063812 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.952 2 DEBUG nova.network.neutron [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updating instance_info_cache with network_info: [{"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.979 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Releasing lock "refresh_cache-d43a5291-85ee-427c-b4b1-aa493ae09f02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.979 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.980 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.980 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.980 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.980 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.980 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:48 np0005485008 nova_compute[192512]: 2025-10-13 16:26:48.981 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.009 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.010 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.010 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.010 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.097 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.167 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.168 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.230 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.238 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.298 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.299 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.360 2 DEBUG oslo_concurrency.processutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:26:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:26:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:26:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:26:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:26:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:26:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.533 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.535 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5524MB free_disk=73.4050064086914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.535 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.535 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.649 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance d43a5291-85ee-427c-b4b1-aa493ae09f02 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.649 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Instance 0503a33e-dafe-4641-9e1d-f91a0a697468 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.649 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.649 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.682 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.682 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.683 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.683 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.683 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.684 2 INFO nova.compute.manager [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Terminating instance#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.685 2 DEBUG nova.compute.manager [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:26:49 np0005485008 kernel: tap1ad5c353-b6 (unregistering): left promiscuous mode
Oct 13 12:26:49 np0005485008 NetworkManager[51587]: <info>  [1760372809.7181] device (tap1ad5c353-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:49 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:49Z|00330|binding|INFO|Releasing lport 1ad5c353-b67b-40e3-bd20-b089f31d32e9 from this chassis (sb_readonly=0)
Oct 13 12:26:49 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:49Z|00331|binding|INFO|Setting lport 1ad5c353-b67b-40e3-bd20-b089f31d32e9 down in Southbound
Oct 13 12:26:49 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:49Z|00332|binding|INFO|Removing iface tap1ad5c353-b6 ovn-installed in OVS
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.737 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:8b:ca 10.100.0.13'], port_security=['fa:16:3e:0f:8b:ca 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd43a5291-85ee-427c-b4b1-aa493ae09f02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8070a861003e4015ac392983e3444a1c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b4a7b065-b6b3-4ca9-a2f2-9ce57a948736', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d76a7fa5-2ace-4961-8c3f-1bc066b9377f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=1ad5c353-b67b-40e3-bd20-b089f31d32e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.739 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 1ad5c353-b67b-40e3-bd20-b089f31d32e9 in datapath 6eec20f4-a93b-4c67-a33f-a03051c51d88 unbound from our chassis#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.740 103642 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6eec20f4-a93b-4c67-a33f-a03051c51d88#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.762 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[3a896bc1-725d-4a25-ae47-72a920d289d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:49 np0005485008 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct 13 12:26:49 np0005485008 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000026.scope: Consumed 16.232s CPU time.
Oct 13 12:26:49 np0005485008 systemd-machined[152551]: Machine qemu-26-instance-00000026 terminated.
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.794 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba7bb55-7f0e-40cb-b489-a7ed741532f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.798 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[9091e5ad-06f3-4066-9c5b-0d9a70e898cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.815 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.827 214979 DEBUG oslo.privsep.daemon [-] privsep: reply[5e948566-8ce7-46d8-bc2c-42742e55e668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.837 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.845 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[98f446d4-b11b-47bc-8d87-60e644fe42e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6eec20f4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:dc:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 11, 'rx_bytes': 1756, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624731, 'reachable_time': 23713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230746, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.862 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[2282fac9-4086-4086-90ab-cc6e57b7108f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624741, 'tstamp': 624741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230747, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6eec20f4-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624744, 'tstamp': 624744}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 230747, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.863 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eec20f4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.869 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6eec20f4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.870 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.870 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6eec20f4-a0, col_values=(('external_ids', {'iface-id': '673b3766-45cd-4d9e-aa60-c456a25db44e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:49 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:49.870 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.901 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.902 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.953 2 INFO nova.virt.libvirt.driver [-] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Instance destroyed successfully.#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.953 2 DEBUG nova.objects.instance [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lazy-loading 'resources' on Instance uuid d43a5291-85ee-427c-b4b1-aa493ae09f02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:26:49 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.999 2 DEBUG nova.virt.libvirt.vif [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-13T16:25:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-375445520',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-375445520',id=38,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:25:10Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8070a861003e4015ac392983e3444a1c',ramdisk_id='',reservation_id='r-mumyo7w1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-555259936',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-555259936-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:25:10Z,user_data=None,user_id='fa4b59a04fe44b478d878bb3964dfc67',uuid=d43a5291-85ee-427c-b4b1-aa493ae09f02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:49.999 2 DEBUG nova.network.os_vif_util [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converting VIF {"id": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "address": "fa:16:3e:0f:8b:ca", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ad5c353-b6", "ovs_interfaceid": "1ad5c353-b67b-40e3-bd20-b089f31d32e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.000 2 DEBUG nova.network.os_vif_util [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:8b:ca,bridge_name='br-int',has_traffic_filtering=True,id=1ad5c353-b67b-40e3-bd20-b089f31d32e9,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ad5c353-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.001 2 DEBUG os_vif [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:8b:ca,bridge_name='br-int',has_traffic_filtering=True,id=1ad5c353-b67b-40e3-bd20-b089f31d32e9,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ad5c353-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.002 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ad5c353-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.057 2 INFO os_vif [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:8b:ca,bridge_name='br-int',has_traffic_filtering=True,id=1ad5c353-b67b-40e3-bd20-b089f31d32e9,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ad5c353-b6')#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.058 2 INFO nova.virt.libvirt.driver [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Deleting instance files /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02_del#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.058 2 INFO nova.virt.libvirt.driver [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Deletion of /var/lib/nova/instances/d43a5291-85ee-427c-b4b1-aa493ae09f02_del complete#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.124 2 INFO nova.compute.manager [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.125 2 DEBUG oslo.service.loopingcall [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.125 2 DEBUG nova.compute.manager [-] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.125 2 DEBUG nova.network.neutron [-] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.251 2 DEBUG nova.compute.manager [req-12550a48-3799-4730-b1c7-eff5dbcd6e7a req-ae70c1dd-7925-4792-b9ae-6a3645543796 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received event network-vif-unplugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.251 2 DEBUG oslo_concurrency.lockutils [req-12550a48-3799-4730-b1c7-eff5dbcd6e7a req-ae70c1dd-7925-4792-b9ae-6a3645543796 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.251 2 DEBUG oslo_concurrency.lockutils [req-12550a48-3799-4730-b1c7-eff5dbcd6e7a req-ae70c1dd-7925-4792-b9ae-6a3645543796 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.252 2 DEBUG oslo_concurrency.lockutils [req-12550a48-3799-4730-b1c7-eff5dbcd6e7a req-ae70c1dd-7925-4792-b9ae-6a3645543796 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.252 2 DEBUG nova.compute.manager [req-12550a48-3799-4730-b1c7-eff5dbcd6e7a req-ae70c1dd-7925-4792-b9ae-6a3645543796 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] No waiting events found dispatching network-vif-unplugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:26:50 np0005485008 nova_compute[192512]: 2025-10-13 16:26:50.252 2 DEBUG nova.compute.manager [req-12550a48-3799-4730-b1c7-eff5dbcd6e7a req-ae70c1dd-7925-4792-b9ae-6a3645543796 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received event network-vif-unplugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.252 2 DEBUG nova.network.neutron [-] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.283 2 INFO nova.compute.manager [-] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Took 1.16 seconds to deallocate network for instance.#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.366 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.367 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.422 2 DEBUG nova.compute.manager [req-d20838d5-7e9a-48e7-8064-aa39565464c2 req-0acf81b4-7c30-44a6-a821-0fdf09a92bed 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received event network-vif-deleted-1ad5c353-b67b-40e3-bd20-b089f31d32e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.473 2 DEBUG nova.compute.provider_tree [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.497 2 DEBUG nova.scheduler.client.report [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.536 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.592 2 INFO nova.scheduler.client.report [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Deleted allocations for instance d43a5291-85ee-427c-b4b1-aa493ae09f02#033[00m
Oct 13 12:26:51 np0005485008 nova_compute[192512]: 2025-10-13 16:26:51.703 2 DEBUG oslo_concurrency.lockutils [None req-6f16fdad-15c2-4b73-918e-d62c2c0c664a fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:52 np0005485008 nova_compute[192512]: 2025-10-13 16:26:52.337 2 DEBUG nova.compute.manager [req-39504577-1ead-4609-9c72-31d42c2ac5e3 req-2e1fcee9-9fb1-4c31-b424-359097e7c3e2 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received event network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:26:52 np0005485008 nova_compute[192512]: 2025-10-13 16:26:52.338 2 DEBUG oslo_concurrency.lockutils [req-39504577-1ead-4609-9c72-31d42c2ac5e3 req-2e1fcee9-9fb1-4c31-b424-359097e7c3e2 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:52 np0005485008 nova_compute[192512]: 2025-10-13 16:26:52.338 2 DEBUG oslo_concurrency.lockutils [req-39504577-1ead-4609-9c72-31d42c2ac5e3 req-2e1fcee9-9fb1-4c31-b424-359097e7c3e2 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:52 np0005485008 nova_compute[192512]: 2025-10-13 16:26:52.338 2 DEBUG oslo_concurrency.lockutils [req-39504577-1ead-4609-9c72-31d42c2ac5e3 req-2e1fcee9-9fb1-4c31-b424-359097e7c3e2 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "d43a5291-85ee-427c-b4b1-aa493ae09f02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:52 np0005485008 nova_compute[192512]: 2025-10-13 16:26:52.338 2 DEBUG nova.compute.manager [req-39504577-1ead-4609-9c72-31d42c2ac5e3 req-2e1fcee9-9fb1-4c31-b424-359097e7c3e2 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] No waiting events found dispatching network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:26:52 np0005485008 nova_compute[192512]: 2025-10-13 16:26:52.338 2 WARNING nova.compute.manager [req-39504577-1ead-4609-9c72-31d42c2ac5e3 req-2e1fcee9-9fb1-4c31-b424-359097e7c3e2 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Received unexpected event network-vif-plugged-1ad5c353-b67b-40e3-bd20-b089f31d32e9 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.120 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "0503a33e-dafe-4641-9e1d-f91a0a697468" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.121 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.121 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.121 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.122 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.123 2 INFO nova.compute.manager [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Terminating instance#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.123 2 DEBUG nova.compute.manager [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct 13 12:26:53 np0005485008 kernel: tap666710ee-80 (unregistering): left promiscuous mode
Oct 13 12:26:53 np0005485008 NetworkManager[51587]: <info>  [1760372813.1575] device (tap666710ee-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:53Z|00333|binding|INFO|Releasing lport 666710ee-8031-4f27-8279-e526e6229929 from this chassis (sb_readonly=0)
Oct 13 12:26:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:53Z|00334|binding|INFO|Setting lport 666710ee-8031-4f27-8279-e526e6229929 down in Southbound
Oct 13 12:26:53 np0005485008 ovn_controller[94758]: 2025-10-13T16:26:53Z|00335|binding|INFO|Removing iface tap666710ee-80 ovn-installed in OVS
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.201 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:46:f9 10.100.0.3'], port_security=['fa:16:3e:5d:46:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '0503a33e-dafe-4641-9e1d-f91a0a697468', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8070a861003e4015ac392983e3444a1c', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'b4a7b065-b6b3-4ca9-a2f2-9ce57a948736', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d76a7fa5-2ace-4961-8c3f-1bc066b9377f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>], logical_port=666710ee-8031-4f27-8279-e526e6229929) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f01ba4b48b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.202 103642 INFO neutron.agent.ovn.metadata.agent [-] Port 666710ee-8031-4f27-8279-e526e6229929 in datapath 6eec20f4-a93b-4c67-a33f-a03051c51d88 unbound from our chassis#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.203 103642 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6eec20f4-a93b-4c67-a33f-a03051c51d88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.238 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[4113c8e1-7bce-4de0-a71c-79d3410a1b74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.240 103642 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88 namespace which is not needed anymore#033[00m
Oct 13 12:26:53 np0005485008 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000025.scope: Deactivated successfully.
Oct 13 12:26:53 np0005485008 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000025.scope: Consumed 2.327s CPU time.
Oct 13 12:26:53 np0005485008 systemd-machined[152551]: Machine qemu-27-instance-00000025 terminated.
Oct 13 12:26:53 np0005485008 neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88[230098]: [NOTICE]   (230103) : haproxy version is 2.8.14-c23fe91
Oct 13 12:26:53 np0005485008 neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88[230098]: [NOTICE]   (230103) : path to executable is /usr/sbin/haproxy
Oct 13 12:26:53 np0005485008 neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88[230098]: [WARNING]  (230103) : Exiting Master process...
Oct 13 12:26:53 np0005485008 neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88[230098]: [ALERT]    (230103) : Current worker (230105) exited with code 143 (Terminated)
Oct 13 12:26:53 np0005485008 neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88[230098]: [WARNING]  (230103) : All workers exited. Exiting... (0)
Oct 13 12:26:53 np0005485008 systemd[1]: libpod-5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02.scope: Deactivated successfully.
Oct 13 12:26:53 np0005485008 conmon[230098]: conmon 5e7a0b380c603c375199 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02.scope/container/memory.events
Oct 13 12:26:53 np0005485008 podman[230789]: 2025-10-13 16:26:53.374143804 +0000 UTC m=+0.044236063 container died 5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009)
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.380 2 INFO nova.virt.libvirt.driver [-] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Instance destroyed successfully.#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.381 2 DEBUG nova.objects.instance [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lazy-loading 'resources' on Instance uuid 0503a33e-dafe-4641-9e1d-f91a0a697468 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct 13 12:26:53 np0005485008 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02-userdata-shm.mount: Deactivated successfully.
Oct 13 12:26:53 np0005485008 systemd[1]: var-lib-containers-storage-overlay-17f7e072c56cf2d4406d59b041ae1b48eacf18894fadafba4003f01fa10ec119-merged.mount: Deactivated successfully.
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.410 2 DEBUG nova.virt.libvirt.vif [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-13T16:24:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestExecuteZoneMigrationStrategy-server-2035283321',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testexecutezonemigrationstrategy-server-2035283321',id=37,image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-13T16:24:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8070a861003e4015ac392983e3444a1c',ramdisk_id='',reservation_id='r-wxh0xb45',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',clean_attempts='1',image_base_image_ref='dcd9fbd3-16ab-46e1-976e-0576b433c9d5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestExecuteZoneMigrationStrategy-555259936',owner_user_name='tempest-TestExecuteZoneMigrationStrategy-555259936-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-13T16:26:43Z,user_data=None,user_id='fa4b59a04fe44b478d878bb3964dfc67',uuid=0503a33e-dafe-4641-9e1d-f91a0a697468,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "666710ee-8031-4f27-8279-e526e6229929", "address": "fa:16:3e:5d:46:f9", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap666710ee-80", "ovs_interfaceid": "666710ee-8031-4f27-8279-e526e6229929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.411 2 DEBUG nova.network.os_vif_util [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converting VIF {"id": "666710ee-8031-4f27-8279-e526e6229929", "address": "fa:16:3e:5d:46:f9", "network": {"id": "6eec20f4-a93b-4c67-a33f-a03051c51d88", "bridge": "br-int", "label": "tempest-TestExecuteZoneMigrationStrategy-1854145690-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d1e08aa8dd5f4ba4a4f76c59beb6b64f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap666710ee-80", "ovs_interfaceid": "666710ee-8031-4f27-8279-e526e6229929", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.411 2 DEBUG nova.network.os_vif_util [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:46:f9,bridge_name='br-int',has_traffic_filtering=True,id=666710ee-8031-4f27-8279-e526e6229929,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap666710ee-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.412 2 DEBUG os_vif [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:46:f9,bridge_name='br-int',has_traffic_filtering=True,id=666710ee-8031-4f27-8279-e526e6229929,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap666710ee-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap666710ee-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.418 2 INFO os_vif [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:46:f9,bridge_name='br-int',has_traffic_filtering=True,id=666710ee-8031-4f27-8279-e526e6229929,network=Network(6eec20f4-a93b-4c67-a33f-a03051c51d88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap666710ee-80')#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.419 2 INFO nova.virt.libvirt.driver [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Deleting instance files /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468_del#033[00m
Oct 13 12:26:53 np0005485008 podman[230789]: 2025-10-13 16:26:53.4198422 +0000 UTC m=+0.089934449 container cleanup 5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.420 2 INFO nova.virt.libvirt.driver [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Deletion of /var/lib/nova/instances/0503a33e-dafe-4641-9e1d-f91a0a697468_del complete#033[00m
Oct 13 12:26:53 np0005485008 systemd[1]: libpod-conmon-5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02.scope: Deactivated successfully.
Oct 13 12:26:53 np0005485008 podman[230834]: 2025-10-13 16:26:53.481343308 +0000 UTC m=+0.036808852 container remove 5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.487 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6694cd-d31f-437c-bfeb-1944d592b0b2]: (4, ('Mon Oct 13 04:26:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88 (5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02)\n5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02\nMon Oct 13 04:26:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88 (5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02)\n5e7a0b380c603c375199e30fa9687f5756626e087d70ac54ab1a77a77a22be02\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.489 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[70af45a8-c855-4fc9-ae6f-a982132b09e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.490 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6eec20f4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 kernel: tap6eec20f4-a0: left promiscuous mode
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.508 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[37a3aa97-bf01-4085-b4d9-a49d71e4947c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.534 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[922ddd44-7787-4949-a60d-65fce11995fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.536 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac3d464-2c41-4ff6-b317-64c45db8f725]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.551 214965 DEBUG oslo.privsep.daemon [-] privsep: reply[c6523246-1a06-41d8-93b8-41b701b249a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624722, 'reachable_time': 21045, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 230849, 'error': None, 'target': 'ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.554 103757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6eec20f4-a93b-4c67-a33f-a03051c51d88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct 13 12:26:53 np0005485008 systemd[1]: run-netns-ovnmeta\x2d6eec20f4\x2da93b\x2d4c67\x2da33f\x2da03051c51d88.mount: Deactivated successfully.
Oct 13 12:26:53 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:26:53.554 103757 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e53f0e-4a36-4a41-bb83-c71a7d65f8df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.686 2 INFO nova.compute.manager [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Took 0.56 seconds to destroy the instance on the hypervisor.#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.688 2 DEBUG oslo.service.loopingcall [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.689 2 DEBUG nova.compute.manager [-] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct 13 12:26:53 np0005485008 nova_compute[192512]: 2025-10-13 16:26:53.689 2 DEBUG nova.network.neutron [-] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.359 2 DEBUG nova.compute.manager [req-7b118a14-54ce-46e3-b399-564171ea0105 req-a969a166-2184-4c1a-ab2c-3fd00c1353b8 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Received event network-vif-unplugged-666710ee-8031-4f27-8279-e526e6229929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.360 2 DEBUG oslo_concurrency.lockutils [req-7b118a14-54ce-46e3-b399-564171ea0105 req-a969a166-2184-4c1a-ab2c-3fd00c1353b8 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.360 2 DEBUG oslo_concurrency.lockutils [req-7b118a14-54ce-46e3-b399-564171ea0105 req-a969a166-2184-4c1a-ab2c-3fd00c1353b8 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.360 2 DEBUG oslo_concurrency.lockutils [req-7b118a14-54ce-46e3-b399-564171ea0105 req-a969a166-2184-4c1a-ab2c-3fd00c1353b8 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.361 2 DEBUG nova.compute.manager [req-7b118a14-54ce-46e3-b399-564171ea0105 req-a969a166-2184-4c1a-ab2c-3fd00c1353b8 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] No waiting events found dispatching network-vif-unplugged-666710ee-8031-4f27-8279-e526e6229929 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.361 2 DEBUG nova.compute.manager [req-7b118a14-54ce-46e3-b399-564171ea0105 req-a969a166-2184-4c1a-ab2c-3fd00c1353b8 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Received event network-vif-unplugged-666710ee-8031-4f27-8279-e526e6229929 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.527 2 DEBUG nova.network.neutron [-] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.551 2 INFO nova.compute.manager [-] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Took 2.86 seconds to deallocate network for instance.#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.609 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.610 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.676 2 DEBUG nova.compute.provider_tree [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.692 2 DEBUG nova.scheduler.client.report [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.716 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.752 2 INFO nova.scheduler.client.report [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Deleted allocations for instance 0503a33e-dafe-4641-9e1d-f91a0a697468#033[00m
Oct 13 12:26:56 np0005485008 nova_compute[192512]: 2025-10-13 16:26:56.827 2 DEBUG oslo_concurrency.lockutils [None req-06aa65ef-1e83-47e3-b479-7c3defb3f940 fa4b59a04fe44b478d878bb3964dfc67 8070a861003e4015ac392983e3444a1c - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.677 2 DEBUG nova.compute.manager [req-7acec121-93be-4ef5-8045-434aa50e836a req-906c2e13-b435-4a50-8f15-b0823fade26e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Received event network-vif-deleted-666710ee-8031-4f27-8279-e526e6229929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.678 2 DEBUG nova.compute.manager [req-7acec121-93be-4ef5-8045-434aa50e836a req-906c2e13-b435-4a50-8f15-b0823fade26e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Received event network-vif-plugged-666710ee-8031-4f27-8279-e526e6229929 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.678 2 DEBUG oslo_concurrency.lockutils [req-7acec121-93be-4ef5-8045-434aa50e836a req-906c2e13-b435-4a50-8f15-b0823fade26e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Acquiring lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.679 2 DEBUG oslo_concurrency.lockutils [req-7acec121-93be-4ef5-8045-434aa50e836a req-906c2e13-b435-4a50-8f15-b0823fade26e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.679 2 DEBUG oslo_concurrency.lockutils [req-7acec121-93be-4ef5-8045-434aa50e836a req-906c2e13-b435-4a50-8f15-b0823fade26e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] Lock "0503a33e-dafe-4641-9e1d-f91a0a697468-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.679 2 DEBUG nova.compute.manager [req-7acec121-93be-4ef5-8045-434aa50e836a req-906c2e13-b435-4a50-8f15-b0823fade26e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] No waiting events found dispatching network-vif-plugged-666710ee-8031-4f27-8279-e526e6229929 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct 13 12:26:58 np0005485008 nova_compute[192512]: 2025-10-13 16:26:58.679 2 WARNING nova.compute.manager [req-7acec121-93be-4ef5-8045-434aa50e836a req-906c2e13-b435-4a50-8f15-b0823fade26e 2c824e2ebdf14b709fd6f7ed02ac1ef5 503944c6930e47acb3dde917828341d2 - - default default] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Received unexpected event network-vif-plugged-666710ee-8031-4f27-8279-e526e6229929 for instance with vm_state deleted and task_state None.#033[00m
Oct 13 12:27:00 np0005485008 podman[230850]: 2025-10-13 16:27:00.763425457 +0000 UTC m=+0.064488391 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, maintainer=Red Hat, Inc.)
Oct 13 12:27:03 np0005485008 nova_compute[192512]: 2025-10-13 16:27:03.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:03 np0005485008 nova_compute[192512]: 2025-10-13 16:27:03.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:04 np0005485008 nova_compute[192512]: 2025-10-13 16:27:04.951 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760372809.9498386, d43a5291-85ee-427c-b4b1-aa493ae09f02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:27:04 np0005485008 nova_compute[192512]: 2025-10-13 16:27:04.952 2 INFO nova.compute.manager [-] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:27:05 np0005485008 nova_compute[192512]: 2025-10-13 16:27:05.010 2 DEBUG nova.compute.manager [None req-799c766a-7ae8-4543-ac67-bbca6b457482 - - - - - -] [instance: d43a5291-85ee-427c-b4b1-aa493ae09f02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:27:05 np0005485008 podman[202884]: time="2025-10-13T16:27:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:27:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:27:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:27:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:27:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3011 "" "Go-http-client/1.1"
Oct 13 12:27:08 np0005485008 nova_compute[192512]: 2025-10-13 16:27:08.380 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760372813.3793077, 0503a33e-dafe-4641-9e1d-f91a0a697468 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct 13 12:27:08 np0005485008 nova_compute[192512]: 2025-10-13 16:27:08.380 2 INFO nova.compute.manager [-] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] VM Stopped (Lifecycle Event)#033[00m
Oct 13 12:27:08 np0005485008 nova_compute[192512]: 2025-10-13 16:27:08.412 2 DEBUG nova.compute.manager [None req-be048b02-c7ef-4207-9472-e0ff87d57352 - - - - - -] [instance: 0503a33e-dafe-4641-9e1d-f91a0a697468] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct 13 12:27:08 np0005485008 nova_compute[192512]: 2025-10-13 16:27:08.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:08 np0005485008 nova_compute[192512]: 2025-10-13 16:27:08.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:13 np0005485008 nova_compute[192512]: 2025-10-13 16:27:13.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:13 np0005485008 nova_compute[192512]: 2025-10-13 16:27:13.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:17 np0005485008 podman[230876]: 2025-10-13 16:27:17.770743507 +0000 UTC m=+0.058655408 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:27:17 np0005485008 podman[230875]: 2025-10-13 16:27:17.779802398 +0000 UTC m=+0.072239189 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2)
Oct 13 12:27:17 np0005485008 podman[230873]: 2025-10-13 16:27:17.796690712 +0000 UTC m=+0.096393609 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=multipathd, org.label-schema.build-date=20251009, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 12:27:17 np0005485008 podman[230874]: 2025-10-13 16:27:17.806442965 +0000 UTC m=+0.103823009 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, config_id=iscsid, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 13 12:27:17 np0005485008 podman[230887]: 2025-10-13 16:27:17.812343017 +0000 UTC m=+0.090299459 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 13 12:27:18 np0005485008 nova_compute[192512]: 2025-10-13 16:27:18.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:18 np0005485008 nova_compute[192512]: 2025-10-13 16:27:18.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:27:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:27:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:27:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:27:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:27:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:27:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:27:23 np0005485008 nova_compute[192512]: 2025-10-13 16:27:23.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:23 np0005485008 nova_compute[192512]: 2025-10-13 16:27:23.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:26 np0005485008 ovn_controller[94758]: 2025-10-13T16:27:26Z|00336|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 13 12:27:28 np0005485008 nova_compute[192512]: 2025-10-13 16:27:28.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:28 np0005485008 nova_compute[192512]: 2025-10-13 16:27:28.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:31 np0005485008 podman[230977]: 2025-10-13 16:27:31.786572389 +0000 UTC m=+0.086446022 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Oct 13 12:27:33 np0005485008 nova_compute[192512]: 2025-10-13 16:27:33.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:33 np0005485008 nova_compute[192512]: 2025-10-13 16:27:33.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:27:33.997 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:27:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:27:33.998 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:27:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:27:33.998 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:27:35 np0005485008 podman[202884]: time="2025-10-13T16:27:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:27:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:27:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:27:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:27:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 13 12:27:38 np0005485008 nova_compute[192512]: 2025-10-13 16:27:38.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:38 np0005485008 nova_compute[192512]: 2025-10-13 16:27:38.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:42 np0005485008 nova_compute[192512]: 2025-10-13 16:27:42.348 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:42 np0005485008 nova_compute[192512]: 2025-10-13 16:27:42.349 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:27:42 np0005485008 nova_compute[192512]: 2025-10-13 16:27:42.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:42 np0005485008 nova_compute[192512]: 2025-10-13 16:27:42.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:42 np0005485008 nova_compute[192512]: 2025-10-13 16:27:42.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:27:42 np0005485008 nova_compute[192512]: 2025-10-13 16:27:42.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:27:42 np0005485008 nova_compute[192512]: 2025-10-13 16:27:42.479 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:27:43 np0005485008 nova_compute[192512]: 2025-10-13 16:27:43.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:43 np0005485008 nova_compute[192512]: 2025-10-13 16:27:43.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:44 np0005485008 nova_compute[192512]: 2025-10-13 16:27:44.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:46 np0005485008 nova_compute[192512]: 2025-10-13 16:27:46.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:48 np0005485008 nova_compute[192512]: 2025-10-13 16:27:48.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:48 np0005485008 nova_compute[192512]: 2025-10-13 16:27:48.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:48 np0005485008 nova_compute[192512]: 2025-10-13 16:27:48.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:48 np0005485008 podman[231002]: 2025-10-13 16:27:48.767307802 +0000 UTC m=+0.063765528 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:27:48 np0005485008 podman[231003]: 2025-10-13 16:27:48.771650988 +0000 UTC m=+0.059440855 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:27:48 np0005485008 podman[231000]: 2025-10-13 16:27:48.778401096 +0000 UTC m=+0.079595439 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251009, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 12:27:48 np0005485008 podman[231001]: 2025-10-13 16:27:48.788605183 +0000 UTC m=+0.089919949 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 13 12:27:48 np0005485008 podman[231013]: 2025-10-13 16:27:48.807876791 +0000 UTC m=+0.093603614 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0)
Oct 13 12:27:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:27:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:27:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:27:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:27:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:27:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:27:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:27:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.620 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.621 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.621 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.621 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.780 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.782 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5852MB free_disk=73.46296691894531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.782 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.782 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.915 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.915 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:27:49 np0005485008 nova_compute[192512]: 2025-10-13 16:27:49.938 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:27:50 np0005485008 nova_compute[192512]: 2025-10-13 16:27:50.053 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:27:50 np0005485008 nova_compute[192512]: 2025-10-13 16:27:50.244 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:27:50 np0005485008 nova_compute[192512]: 2025-10-13 16:27:50.245 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:27:53 np0005485008 nova_compute[192512]: 2025-10-13 16:27:53.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:53 np0005485008 nova_compute[192512]: 2025-10-13 16:27:53.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:58 np0005485008 nova_compute[192512]: 2025-10-13 16:27:58.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:27:58 np0005485008 nova_compute[192512]: 2025-10-13 16:27:58.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:00 np0005485008 nova_compute[192512]: 2025-10-13 16:28:00.241 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:02 np0005485008 podman[231101]: 2025-10-13 16:28:02.749495252 +0000 UTC m=+0.056476492 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Oct 13 12:28:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:28:03.226 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:28:03 np0005485008 nova_compute[192512]: 2025-10-13 16:28:03.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:28:03.227 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:28:03 np0005485008 nova_compute[192512]: 2025-10-13 16:28:03.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:03 np0005485008 nova_compute[192512]: 2025-10-13 16:28:03.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:05 np0005485008 nova_compute[192512]: 2025-10-13 16:28:05.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:05 np0005485008 podman[202884]: time="2025-10-13T16:28:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:28:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:28:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:28:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:28:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 13 12:28:08 np0005485008 nova_compute[192512]: 2025-10-13 16:28:08.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:08 np0005485008 nova_compute[192512]: 2025-10-13 16:28:08.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:11 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:28:11.228 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:28:13 np0005485008 nova_compute[192512]: 2025-10-13 16:28:13.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:13 np0005485008 nova_compute[192512]: 2025-10-13 16:28:13.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:18 np0005485008 nova_compute[192512]: 2025-10-13 16:28:18.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:18 np0005485008 nova_compute[192512]: 2025-10-13 16:28:18.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:28:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:28:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:28:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:28:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:28:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:28:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:28:19 np0005485008 podman[231122]: 2025-10-13 16:28:19.772161068 +0000 UTC m=+0.071455047 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 12:28:19 np0005485008 podman[231125]: 2025-10-13 16:28:19.782441616 +0000 UTC m=+0.069094992 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:28:19 np0005485008 podman[231124]: 2025-10-13 16:28:19.783183809 +0000 UTC m=+0.074686356 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 13 12:28:19 np0005485008 podman[231123]: 2025-10-13 16:28:19.802687874 +0000 UTC m=+0.097345869 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:28:19 np0005485008 podman[231131]: 2025-10-13 16:28:19.82739872 +0000 UTC m=+0.108676700 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 12:28:23 np0005485008 nova_compute[192512]: 2025-10-13 16:28:23.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:23 np0005485008 nova_compute[192512]: 2025-10-13 16:28:23.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:28 np0005485008 nova_compute[192512]: 2025-10-13 16:28:28.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:28 np0005485008 nova_compute[192512]: 2025-10-13 16:28:28.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:30 np0005485008 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 13 12:28:33 np0005485008 nova_compute[192512]: 2025-10-13 16:28:33.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:33 np0005485008 nova_compute[192512]: 2025-10-13 16:28:33.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:33 np0005485008 podman[231231]: 2025-10-13 16:28:33.770275209 +0000 UTC m=+0.073187109 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Oct 13 12:28:33 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:28:33.998 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:28:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:28:33.999 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:28:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:28:33.999 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:28:35 np0005485008 podman[202884]: time="2025-10-13T16:28:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:28:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:28:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:28:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:28:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3012 "" "Go-http-client/1.1"
Oct 13 12:28:38 np0005485008 nova_compute[192512]: 2025-10-13 16:28:38.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:38 np0005485008 nova_compute[192512]: 2025-10-13 16:28:38.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:39 np0005485008 ovn_controller[94758]: 2025-10-13T16:28:39Z|00337|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct 13 12:28:42 np0005485008 nova_compute[192512]: 2025-10-13 16:28:42.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:42 np0005485008 nova_compute[192512]: 2025-10-13 16:28:42.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:28:43 np0005485008 nova_compute[192512]: 2025-10-13 16:28:43.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:43 np0005485008 nova_compute[192512]: 2025-10-13 16:28:43.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:43 np0005485008 nova_compute[192512]: 2025-10-13 16:28:43.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:44 np0005485008 nova_compute[192512]: 2025-10-13 16:28:44.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:44 np0005485008 nova_compute[192512]: 2025-10-13 16:28:44.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:28:44 np0005485008 nova_compute[192512]: 2025-10-13 16:28:44.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:28:44 np0005485008 nova_compute[192512]: 2025-10-13 16:28:44.465 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:28:45 np0005485008 nova_compute[192512]: 2025-10-13 16:28:45.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:46 np0005485008 nova_compute[192512]: 2025-10-13 16:28:46.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:48 np0005485008 nova_compute[192512]: 2025-10-13 16:28:48.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:48 np0005485008 nova_compute[192512]: 2025-10-13 16:28:48.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:28:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:28:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:28:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:28:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:28:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:28:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:28:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.522 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.523 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.523 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.524 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.706 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.707 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5876MB free_disk=73.46296691894531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.707 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.707 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.941 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:28:49 np0005485008 nova_compute[192512]: 2025-10-13 16:28:49.942 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.030 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.053 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.054 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.070 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.090 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.110 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.243 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.247 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:28:50 np0005485008 nova_compute[192512]: 2025-10-13 16:28:50.247 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:28:50 np0005485008 podman[231256]: 2025-10-13 16:28:50.769727414 +0000 UTC m=+0.065675207 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 12:28:50 np0005485008 podman[231262]: 2025-10-13 16:28:50.804541244 +0000 UTC m=+0.095947607 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:28:50 np0005485008 podman[231255]: 2025-10-13 16:28:50.804567385 +0000 UTC m=+0.100257820 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid)
Oct 13 12:28:50 np0005485008 podman[231254]: 2025-10-13 16:28:50.807165455 +0000 UTC m=+0.112595423 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:28:50 np0005485008 podman[231263]: 2025-10-13 16:28:50.819437746 +0000 UTC m=+0.102826780 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:28:51 np0005485008 nova_compute[192512]: 2025-10-13 16:28:51.248 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:51 np0005485008 nova_compute[192512]: 2025-10-13 16:28:51.248 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:28:53 np0005485008 nova_compute[192512]: 2025-10-13 16:28:53.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:53 np0005485008 nova_compute[192512]: 2025-10-13 16:28:53.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:56 np0005485008 systemd-logind[784]: New session 40 of user zuul.
Oct 13 12:28:56 np0005485008 systemd[1]: Started Session 40 of User zuul.
Oct 13 12:28:58 np0005485008 nova_compute[192512]: 2025-10-13 16:28:58.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:28:58 np0005485008 nova_compute[192512]: 2025-10-13 16:28:58.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:01 np0005485008 ovs-vsctl[231539]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 13 12:29:02 np0005485008 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 231391 (sos)
Oct 13 12:29:02 np0005485008 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 13 12:29:02 np0005485008 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 13 12:29:02 np0005485008 virtqemud[192082]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 13 12:29:02 np0005485008 virtqemud[192082]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 13 12:29:02 np0005485008 virtqemud[192082]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 13 12:29:03 np0005485008 kernel: block sr0: the capability attribute has been deprecated.
Oct 13 12:29:03 np0005485008 nova_compute[192512]: 2025-10-13 16:29:03.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:03 np0005485008 nova_compute[192512]: 2025-10-13 16:29:03.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:04 np0005485008 podman[232024]: 2025-10-13 16:29:04.773554095 +0000 UTC m=+0.066043819 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 12:29:05 np0005485008 podman[202884]: time="2025-10-13T16:29:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:29:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:29:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:29:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:29:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 13 12:29:06 np0005485008 systemd[1]: Starting Hostname Service...
Oct 13 12:29:06 np0005485008 systemd[1]: Started Hostname Service.
Oct 13 12:29:08 np0005485008 nova_compute[192512]: 2025-10-13 16:29:08.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:08 np0005485008 nova_compute[192512]: 2025-10-13 16:29:08.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:12 np0005485008 ovs-appctl[233096]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 13 12:29:12 np0005485008 ovs-appctl[233110]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 13 12:29:12 np0005485008 ovs-appctl[233122]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 13 12:29:13 np0005485008 nova_compute[192512]: 2025-10-13 16:29:13.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:13 np0005485008 nova_compute[192512]: 2025-10-13 16:29:13.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:18 np0005485008 nova_compute[192512]: 2025-10-13 16:29:18.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:18 np0005485008 nova_compute[192512]: 2025-10-13 16:29:18.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:29:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:29:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:29:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:29:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:29:20 np0005485008 virtqemud[192082]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 13 12:29:20 np0005485008 podman[234409]: 2025-10-13 16:29:20.924573677 +0000 UTC m=+0.071506348 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 13 12:29:20 np0005485008 podman[234405]: 2025-10-13 16:29:20.938810798 +0000 UTC m=+0.096477463 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:29:20 np0005485008 podman[234398]: 2025-10-13 16:29:20.945121724 +0000 UTC m=+0.130752966 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 13 12:29:20 np0005485008 podman[234408]: 2025-10-13 16:29:20.971266315 +0000 UTC m=+0.125226385 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2)
Oct 13 12:29:20 np0005485008 podman[234416]: 2025-10-13 16:29:20.97660965 +0000 UTC m=+0.121518329 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Oct 13 12:29:22 np0005485008 systemd[1]: Starting Time & Date Service...
Oct 13 12:29:22 np0005485008 systemd[1]: Started Time & Date Service.
Oct 13 12:29:23 np0005485008 nova_compute[192512]: 2025-10-13 16:29:23.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:23 np0005485008 nova_compute[192512]: 2025-10-13 16:29:23.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:28 np0005485008 nova_compute[192512]: 2025-10-13 16:29:28.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:28 np0005485008 nova_compute[192512]: 2025-10-13 16:29:28.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:33 np0005485008 nova_compute[192512]: 2025-10-13 16:29:33.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:33 np0005485008 nova_compute[192512]: 2025-10-13 16:29:33.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:29:34.000 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:29:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:29:34.000 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:29:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:29:34.000 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:29:35 np0005485008 podman[202884]: time="2025-10-13T16:29:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:29:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:29:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:29:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:29:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 13 12:29:35 np0005485008 podman[234638]: 2025-10-13 16:29:35.777737299 +0000 UTC m=+0.080562970 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Oct 13 12:29:38 np0005485008 nova_compute[192512]: 2025-10-13 16:29:38.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:38 np0005485008 nova_compute[192512]: 2025-10-13 16:29:38.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:43 np0005485008 nova_compute[192512]: 2025-10-13 16:29:43.422 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:43 np0005485008 nova_compute[192512]: 2025-10-13 16:29:43.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:43 np0005485008 nova_compute[192512]: 2025-10-13 16:29:43.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:29:43 np0005485008 nova_compute[192512]: 2025-10-13 16:29:43.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:43 np0005485008 nova_compute[192512]: 2025-10-13 16:29:43.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:43 np0005485008 systemd[1]: session-40.scope: Deactivated successfully.
Oct 13 12:29:43 np0005485008 systemd[1]: session-40.scope: Consumed 1min 17.740s CPU time, 497.7M memory peak, read 106.4M from disk, written 20.2M to disk.
Oct 13 12:29:43 np0005485008 systemd-logind[784]: Session 40 logged out. Waiting for processes to exit.
Oct 13 12:29:43 np0005485008 systemd-logind[784]: Removed session 40.
Oct 13 12:29:43 np0005485008 systemd-logind[784]: New session 41 of user zuul.
Oct 13 12:29:43 np0005485008 systemd[1]: Started Session 41 of User zuul.
Oct 13 12:29:44 np0005485008 systemd[1]: session-41.scope: Deactivated successfully.
Oct 13 12:29:44 np0005485008 systemd-logind[784]: Session 41 logged out. Waiting for processes to exit.
Oct 13 12:29:44 np0005485008 systemd-logind[784]: Removed session 41.
Oct 13 12:29:44 np0005485008 systemd-logind[784]: New session 42 of user zuul.
Oct 13 12:29:44 np0005485008 systemd[1]: Started Session 42 of User zuul.
Oct 13 12:29:44 np0005485008 nova_compute[192512]: 2025-10-13 16:29:44.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:44 np0005485008 nova_compute[192512]: 2025-10-13 16:29:44.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:29:44 np0005485008 nova_compute[192512]: 2025-10-13 16:29:44.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:29:44 np0005485008 systemd-logind[784]: Session 42 logged out. Waiting for processes to exit.
Oct 13 12:29:44 np0005485008 systemd[1]: session-42.scope: Deactivated successfully.
Oct 13 12:29:44 np0005485008 systemd-logind[784]: Removed session 42.
Oct 13 12:29:44 np0005485008 nova_compute[192512]: 2025-10-13 16:29:44.456 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:29:47 np0005485008 nova_compute[192512]: 2025-10-13 16:29:47.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:48 np0005485008 nova_compute[192512]: 2025-10-13 16:29:48.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:48 np0005485008 nova_compute[192512]: 2025-10-13 16:29:48.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:48 np0005485008 nova_compute[192512]: 2025-10-13 16:29:48.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:29:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:29:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:29:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:29:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:29:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:29:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:29:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.468 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.674 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.675 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5667MB free_disk=73.46246337890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.675 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.676 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.754 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.755 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:29:51 np0005485008 podman[234721]: 2025-10-13 16:29:51.793136655 +0000 UTC m=+0.083057797 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 13 12:29:51 np0005485008 podman[234720]: 2025-10-13 16:29:51.802556187 +0000 UTC m=+0.089611250 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.804 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:29:51 np0005485008 podman[234722]: 2025-10-13 16:29:51.818382288 +0000 UTC m=+0.106431602 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.build-date=20251009, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 13 12:29:51 np0005485008 podman[234723]: 2025-10-13 16:29:51.822808344 +0000 UTC m=+0.094770250 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.828 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.830 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:29:51 np0005485008 nova_compute[192512]: 2025-10-13 16:29:51.830 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:29:51 np0005485008 podman[234733]: 2025-10-13 16:29:51.83232181 +0000 UTC m=+0.102545931 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:29:52 np0005485008 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 13 12:29:52 np0005485008 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 13 12:29:52 np0005485008 nova_compute[192512]: 2025-10-13 16:29:52.831 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:29:53 np0005485008 nova_compute[192512]: 2025-10-13 16:29:53.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:53 np0005485008 nova_compute[192512]: 2025-10-13 16:29:53.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:58 np0005485008 nova_compute[192512]: 2025-10-13 16:29:58.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:29:58 np0005485008 nova_compute[192512]: 2025-10-13 16:29:58.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:02 np0005485008 nova_compute[192512]: 2025-10-13 16:30:02.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:03 np0005485008 nova_compute[192512]: 2025-10-13 16:30:03.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:03 np0005485008 nova_compute[192512]: 2025-10-13 16:30:03.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:05 np0005485008 podman[202884]: time="2025-10-13T16:30:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:30:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:30:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:30:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:30:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3015 "" "Go-http-client/1.1"
Oct 13 12:30:06 np0005485008 podman[234829]: 2025-10-13 16:30:06.786903785 +0000 UTC m=+0.077809954 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Oct 13 12:30:08 np0005485008 nova_compute[192512]: 2025-10-13 16:30:08.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:08 np0005485008 nova_compute[192512]: 2025-10-13 16:30:08.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:13 np0005485008 nova_compute[192512]: 2025-10-13 16:30:13.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:13 np0005485008 nova_compute[192512]: 2025-10-13 16:30:13.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:16 np0005485008 systemd[1]: Starting dnf makecache...
Oct 13 12:30:16 np0005485008 dnf[234852]: Metadata cache refreshed recently.
Oct 13 12:30:16 np0005485008 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 13 12:30:16 np0005485008 systemd[1]: Finished dnf makecache.
Oct 13 12:30:18 np0005485008 nova_compute[192512]: 2025-10-13 16:30:18.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:18 np0005485008 nova_compute[192512]: 2025-10-13 16:30:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:30:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:30:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:30:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:30:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:30:22 np0005485008 podman[234853]: 2025-10-13 16:30:22.797786802 +0000 UTC m=+0.102770758 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 13 12:30:22 np0005485008 podman[234854]: 2025-10-13 16:30:22.798076481 +0000 UTC m=+0.099671522 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct 13 12:30:22 np0005485008 podman[234855]: 2025-10-13 16:30:22.800399903 +0000 UTC m=+0.086177404 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct 13 12:30:22 np0005485008 podman[234856]: 2025-10-13 16:30:22.824441969 +0000 UTC m=+0.108642241 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:30:22 np0005485008 podman[234862]: 2025-10-13 16:30:22.824615655 +0000 UTC m=+0.114773871 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009)
Oct 13 12:30:23 np0005485008 nova_compute[192512]: 2025-10-13 16:30:23.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:23 np0005485008 nova_compute[192512]: 2025-10-13 16:30:23.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:28 np0005485008 nova_compute[192512]: 2025-10-13 16:30:28.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:28 np0005485008 nova_compute[192512]: 2025-10-13 16:30:28.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:33 np0005485008 nova_compute[192512]: 2025-10-13 16:30:33.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:33 np0005485008 nova_compute[192512]: 2025-10-13 16:30:33.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:30:34.001 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:30:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:30:34.002 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:30:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:30:34.002 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:30:35 np0005485008 podman[202884]: time="2025-10-13T16:30:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:30:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:30:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:30:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:30:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3017 "" "Go-http-client/1.1"
Oct 13 12:30:37 np0005485008 podman[234953]: 2025-10-13 16:30:37.758267822 +0000 UTC m=+0.055733699 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.openshift.tags=minimal rhel9)
Oct 13 12:30:38 np0005485008 nova_compute[192512]: 2025-10-13 16:30:38.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:38 np0005485008 nova_compute[192512]: 2025-10-13 16:30:38.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:43 np0005485008 nova_compute[192512]: 2025-10-13 16:30:43.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:43 np0005485008 nova_compute[192512]: 2025-10-13 16:30:43.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:43 np0005485008 nova_compute[192512]: 2025-10-13 16:30:43.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:30:43 np0005485008 nova_compute[192512]: 2025-10-13 16:30:43.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:43 np0005485008 nova_compute[192512]: 2025-10-13 16:30:43.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:46 np0005485008 nova_compute[192512]: 2025-10-13 16:30:46.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:46 np0005485008 nova_compute[192512]: 2025-10-13 16:30:46.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:30:46 np0005485008 nova_compute[192512]: 2025-10-13 16:30:46.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:30:46 np0005485008 nova_compute[192512]: 2025-10-13 16:30:46.451 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:30:48 np0005485008 nova_compute[192512]: 2025-10-13 16:30:48.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:48 np0005485008 nova_compute[192512]: 2025-10-13 16:30:48.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:48 np0005485008 nova_compute[192512]: 2025-10-13 16:30:48.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:30:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:30:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:30:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:30:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:30:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:30:49 np0005485008 nova_compute[192512]: 2025-10-13 16:30:49.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:51 np0005485008 nova_compute[192512]: 2025-10-13 16:30:51.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:52 np0005485008 nova_compute[192512]: 2025-10-13 16:30:52.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:52 np0005485008 nova_compute[192512]: 2025-10-13 16:30:52.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.460 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.461 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.462 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.462 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.626 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.627 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5774MB free_disk=73.46288299560547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.627 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.628 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.702 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.702 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.733 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.763 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.766 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.766 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5027 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:30:53 np0005485008 nova_compute[192512]: 2025-10-13 16:30:53.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:53 np0005485008 podman[234977]: 2025-10-13 16:30:53.781752371 +0000 UTC m=+0.088035632 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 13 12:30:53 np0005485008 podman[234985]: 2025-10-13 16:30:53.791286016 +0000 UTC m=+0.075991368 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:30:53 np0005485008 podman[234979]: 2025-10-13 16:30:53.807718175 +0000 UTC m=+0.103278314 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:30:53 np0005485008 podman[234978]: 2025-10-13 16:30:53.812725901 +0000 UTC m=+0.114626636 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:30:53 np0005485008 podman[234991]: 2025-10-13 16:30:53.847183069 +0000 UTC m=+0.132111687 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 13 12:30:58 np0005485008 nova_compute[192512]: 2025-10-13 16:30:58.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:30:58 np0005485008 nova_compute[192512]: 2025-10-13 16:30:58.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:03 np0005485008 nova_compute[192512]: 2025-10-13 16:31:03.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:03 np0005485008 nova_compute[192512]: 2025-10-13 16:31:03.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:05 np0005485008 podman[202884]: time="2025-10-13T16:31:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:31:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:31:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:31:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:31:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 13 12:31:08 np0005485008 podman[235076]: 2025-10-13 16:31:08.748493115 +0000 UTC m=+0.056465903 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Oct 13 12:31:08 np0005485008 nova_compute[192512]: 2025-10-13 16:31:08.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:08 np0005485008 nova_compute[192512]: 2025-10-13 16:31:08.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:13 np0005485008 nova_compute[192512]: 2025-10-13 16:31:13.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:13 np0005485008 nova_compute[192512]: 2025-10-13 16:31:13.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:31:13 np0005485008 nova_compute[192512]: 2025-10-13 16:31:13.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:31:13 np0005485008 nova_compute[192512]: 2025-10-13 16:31:13.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:31:13 np0005485008 nova_compute[192512]: 2025-10-13 16:31:13.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:31:13 np0005485008 nova_compute[192512]: 2025-10-13 16:31:13.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:13 np0005485008 nova_compute[192512]: 2025-10-13 16:31:13.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:31:18 np0005485008 nova_compute[192512]: 2025-10-13 16:31:18.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:31:18 np0005485008 nova_compute[192512]: 2025-10-13 16:31:18.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:18 np0005485008 nova_compute[192512]: 2025-10-13 16:31:18.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:31:18 np0005485008 nova_compute[192512]: 2025-10-13 16:31:18.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:31:18 np0005485008 nova_compute[192512]: 2025-10-13 16:31:18.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:31:18 np0005485008 nova_compute[192512]: 2025-10-13 16:31:18.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:31:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:31:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:31:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:31:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:31:23 np0005485008 nova_compute[192512]: 2025-10-13 16:31:23.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:24 np0005485008 podman[235101]: 2025-10-13 16:31:24.774300803 +0000 UTC m=+0.069563038 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 12:31:24 np0005485008 podman[235103]: 2025-10-13 16:31:24.795031526 +0000 UTC m=+0.069189497 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 13 12:31:24 np0005485008 podman[235102]: 2025-10-13 16:31:24.808405341 +0000 UTC m=+0.095993569 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 13 12:31:24 np0005485008 podman[235104]: 2025-10-13 16:31:24.812650393 +0000 UTC m=+0.081667574 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:31:24 np0005485008 podman[235110]: 2025-10-13 16:31:24.883668545 +0000 UTC m=+0.148613670 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:31:28 np0005485008 nova_compute[192512]: 2025-10-13 16:31:28.440 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:28 np0005485008 nova_compute[192512]: 2025-10-13 16:31:28.441 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct 13 12:31:28 np0005485008 nova_compute[192512]: 2025-10-13 16:31:28.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:31:29 np0005485008 nova_compute[192512]: 2025-10-13 16:31:29.446 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:29 np0005485008 nova_compute[192512]: 2025-10-13 16:31:29.447 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct 13 12:31:29 np0005485008 nova_compute[192512]: 2025-10-13 16:31:29.466 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct 13 12:31:33 np0005485008 nova_compute[192512]: 2025-10-13 16:31:33.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:31:34.002 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:31:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:31:34.003 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:31:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:31:34.003 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:31:35 np0005485008 podman[202884]: time="2025-10-13T16:31:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:31:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:31:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:31:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:31:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3014 "" "Go-http-client/1.1"
Oct 13 12:31:38 np0005485008 nova_compute[192512]: 2025-10-13 16:31:38.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:31:39 np0005485008 podman[235205]: 2025-10-13 16:31:39.752440019 +0000 UTC m=+0.055688828 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.openshift.expose-services=)
Oct 13 12:31:43 np0005485008 nova_compute[192512]: 2025-10-13 16:31:43.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:45 np0005485008 nova_compute[192512]: 2025-10-13 16:31:45.443 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:45 np0005485008 nova_compute[192512]: 2025-10-13 16:31:45.444 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:45 np0005485008 nova_compute[192512]: 2025-10-13 16:31:45.444 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:31:47 np0005485008 nova_compute[192512]: 2025-10-13 16:31:47.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:47 np0005485008 nova_compute[192512]: 2025-10-13 16:31:47.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:31:47 np0005485008 nova_compute[192512]: 2025-10-13 16:31:47.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:31:47 np0005485008 nova_compute[192512]: 2025-10-13 16:31:47.457 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:31:48 np0005485008 nova_compute[192512]: 2025-10-13 16:31:48.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:48 np0005485008 nova_compute[192512]: 2025-10-13 16:31:48.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:31:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:31:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:31:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:31:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:31:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:31:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:31:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:31:49 np0005485008 nova_compute[192512]: 2025-10-13 16:31:49.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:51 np0005485008 nova_compute[192512]: 2025-10-13 16:31:51.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:53 np0005485008 nova_compute[192512]: 2025-10-13 16:31:53.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:53 np0005485008 nova_compute[192512]: 2025-10-13 16:31:53.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:31:53 np0005485008 nova_compute[192512]: 2025-10-13 16:31:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:53 np0005485008 nova_compute[192512]: 2025-10-13 16:31:53.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:31:53 np0005485008 nova_compute[192512]: 2025-10-13 16:31:53.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:31:53 np0005485008 nova_compute[192512]: 2025-10-13 16:31:53.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:31:53 np0005485008 nova_compute[192512]: 2025-10-13 16:31:53.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.473 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.475 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.475 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.476 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.644 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.645 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5788MB free_disk=73.4629020690918GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.646 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.646 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.778 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.779 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.802 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.832 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.834 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:31:54 np0005485008 nova_compute[192512]: 2025-10-13 16:31:54.834 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:31:55 np0005485008 podman[235226]: 2025-10-13 16:31:55.777452254 +0000 UTC m=+0.080869578 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 13 12:31:55 np0005485008 podman[235228]: 2025-10-13 16:31:55.795336119 +0000 UTC m=+0.088334731 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_metadata_agent, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 13 12:31:55 np0005485008 podman[235229]: 2025-10-13 16:31:55.795883985 +0000 UTC m=+0.085635426 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:31:55 np0005485008 podman[235227]: 2025-10-13 16:31:55.811089417 +0000 UTC m=+0.109842047 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:31:55 np0005485008 podman[235235]: 2025-10-13 16:31:55.850419587 +0000 UTC m=+0.137231617 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251009)
Oct 13 12:31:58 np0005485008 nova_compute[192512]: 2025-10-13 16:31:58.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:31:58 np0005485008 nova_compute[192512]: 2025-10-13 16:31:58.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:03 np0005485008 nova_compute[192512]: 2025-10-13 16:32:03.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:03 np0005485008 nova_compute[192512]: 2025-10-13 16:32:03.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:05 np0005485008 podman[202884]: time="2025-10-13T16:32:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:32:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:32:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:32:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:32:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3018 "" "Go-http-client/1.1"
Oct 13 12:32:06 np0005485008 nova_compute[192512]: 2025-10-13 16:32:06.830 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:08 np0005485008 nova_compute[192512]: 2025-10-13 16:32:08.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:10 np0005485008 podman[235329]: 2025-10-13 16:32:10.743191067 +0000 UTC m=+0.048452293 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct 13 12:32:13 np0005485008 nova_compute[192512]: 2025-10-13 16:32:13.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:13 np0005485008 nova_compute[192512]: 2025-10-13 16:32:13.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:13 np0005485008 nova_compute[192512]: 2025-10-13 16:32:13.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:32:13 np0005485008 nova_compute[192512]: 2025-10-13 16:32:13.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:13 np0005485008 nova_compute[192512]: 2025-10-13 16:32:13.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:13 np0005485008 nova_compute[192512]: 2025-10-13 16:32:13.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:18 np0005485008 nova_compute[192512]: 2025-10-13 16:32:18.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:32:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:32:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:32:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:32:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:32:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:32:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:32:23 np0005485008 nova_compute[192512]: 2025-10-13 16:32:23.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:26 np0005485008 podman[235355]: 2025-10-13 16:32:26.776446118 +0000 UTC m=+0.062698315 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:32:26 np0005485008 podman[235353]: 2025-10-13 16:32:26.775914521 +0000 UTC m=+0.072061096 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 13 12:32:26 np0005485008 podman[235354]: 2025-10-13 16:32:26.794711515 +0000 UTC m=+0.075681848 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:32:26 np0005485008 podman[235352]: 2025-10-13 16:32:26.80229938 +0000 UTC m=+0.098184006 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 13 12:32:26 np0005485008 podman[235363]: 2025-10-13 16:32:26.836718987 +0000 UTC m=+0.104366338 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:32:28 np0005485008 nova_compute[192512]: 2025-10-13 16:32:28.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:33 np0005485008 nova_compute[192512]: 2025-10-13 16:32:33.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:32:34.003 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:32:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:32:34.004 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:32:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:32:34.004 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:32:35 np0005485008 podman[202884]: time="2025-10-13T16:32:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:32:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:32:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:32:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:32:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3016 "" "Go-http-client/1.1"
Oct 13 12:32:38 np0005485008 nova_compute[192512]: 2025-10-13 16:32:38.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:38 np0005485008 nova_compute[192512]: 2025-10-13 16:32:38.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:41 np0005485008 podman[235454]: 2025-10-13 16:32:41.779585569 +0000 UTC m=+0.085067419 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., release=1755695350)
Oct 13 12:32:43 np0005485008 nova_compute[192512]: 2025-10-13 16:32:43.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:43 np0005485008 nova_compute[192512]: 2025-10-13 16:32:43.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:43 np0005485008 nova_compute[192512]: 2025-10-13 16:32:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5030 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:32:43 np0005485008 nova_compute[192512]: 2025-10-13 16:32:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:43 np0005485008 nova_compute[192512]: 2025-10-13 16:32:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:43 np0005485008 nova_compute[192512]: 2025-10-13 16:32:43.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:45 np0005485008 nova_compute[192512]: 2025-10-13 16:32:45.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:45 np0005485008 nova_compute[192512]: 2025-10-13 16:32:45.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:32:47 np0005485008 nova_compute[192512]: 2025-10-13 16:32:47.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:47 np0005485008 nova_compute[192512]: 2025-10-13 16:32:47.426 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:47 np0005485008 nova_compute[192512]: 2025-10-13 16:32:47.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:32:47 np0005485008 nova_compute[192512]: 2025-10-13 16:32:47.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:32:47 np0005485008 nova_compute[192512]: 2025-10-13 16:32:47.518 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:32:48 np0005485008 nova_compute[192512]: 2025-10-13 16:32:48.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:48 np0005485008 nova_compute[192512]: 2025-10-13 16:32:48.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:48 np0005485008 nova_compute[192512]: 2025-10-13 16:32:48.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:48 np0005485008 nova_compute[192512]: 2025-10-13 16:32:48.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:32:48 np0005485008 nova_compute[192512]: 2025-10-13 16:32:48.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:48 np0005485008 nova_compute[192512]: 2025-10-13 16:32:48.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:48 np0005485008 nova_compute[192512]: 2025-10-13 16:32:48.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:32:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:32:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:32:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:32:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:32:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:32:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:32:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:32:50 np0005485008 nova_compute[192512]: 2025-10-13 16:32:50.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:51 np0005485008 nova_compute[192512]: 2025-10-13 16:32:51.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:53 np0005485008 nova_compute[192512]: 2025-10-13 16:32:53.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:53 np0005485008 nova_compute[192512]: 2025-10-13 16:32:53.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:53 np0005485008 nova_compute[192512]: 2025-10-13 16:32:53.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:53 np0005485008 nova_compute[192512]: 2025-10-13 16:32:53.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:32:53 np0005485008 nova_compute[192512]: 2025-10-13 16:32:53.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:53 np0005485008 nova_compute[192512]: 2025-10-13 16:32:53.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:53 np0005485008 nova_compute[192512]: 2025-10-13 16:32:53.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.470 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.471 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.471 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.471 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.626 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.627 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5802MB free_disk=73.4624137878418GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.627 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:32:55 np0005485008 nova_compute[192512]: 2025-10-13 16:32:55.628 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.072 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.072 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.311 2 DEBUG oslo_concurrency.processutils [None req-038cc9c7-dce6-476b-86db-a63cff40968c 865607264bba43aa9610d9440c89e920 d93a2ce330a244f186b39e1ea3fc96a4 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.332 2 DEBUG oslo_concurrency.processutils [None req-038cc9c7-dce6-476b-86db-a63cff40968c 865607264bba43aa9610d9440c89e920 d93a2ce330a244f186b39e1ea3fc96a4 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.593 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.609 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.611 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:32:57 np0005485008 nova_compute[192512]: 2025-10-13 16:32:57.612 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:32:57 np0005485008 podman[235478]: 2025-10-13 16:32:57.762688185 +0000 UTC m=+0.057524225 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:32:57 np0005485008 podman[235479]: 2025-10-13 16:32:57.77832546 +0000 UTC m=+0.062521360 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Oct 13 12:32:57 np0005485008 podman[235477]: 2025-10-13 16:32:57.79027012 +0000 UTC m=+0.092374696 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 13 12:32:57 np0005485008 podman[235476]: 2025-10-13 16:32:57.793379437 +0000 UTC m=+0.093936484 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251009, tcib_managed=true)
Oct 13 12:32:57 np0005485008 podman[235480]: 2025-10-13 16:32:57.814368998 +0000 UTC m=+0.103672676 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:32:59 np0005485008 nova_compute[192512]: 2025-10-13 16:32:58.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:59 np0005485008 nova_compute[192512]: 2025-10-13 16:32:58.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:32:59 np0005485008 nova_compute[192512]: 2025-10-13 16:32:58.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:32:59 np0005485008 nova_compute[192512]: 2025-10-13 16:32:58.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:32:59 np0005485008 nova_compute[192512]: 2025-10-13 16:32:59.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:32:59 np0005485008 nova_compute[192512]: 2025-10-13 16:32:59.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:33:03.819 103642 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ca:c3:b4', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:fa:06:7c:6f:1a'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct 13 12:33:03 np0005485008 nova_compute[192512]: 2025-10-13 16:33:03.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:03 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:33:03.821 103642 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct 13 12:33:04 np0005485008 nova_compute[192512]: 2025-10-13 16:33:04.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:05 np0005485008 podman[202884]: time="2025-10-13T16:33:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:33:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:33:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:33:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:33:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3016 "" "Go-http-client/1.1"
Oct 13 12:33:09 np0005485008 nova_compute[192512]: 2025-10-13 16:33:09.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:33:09 np0005485008 nova_compute[192512]: 2025-10-13 16:33:09.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:33:09 np0005485008 nova_compute[192512]: 2025-10-13 16:33:09.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:33:09 np0005485008 nova_compute[192512]: 2025-10-13 16:33:09.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:09 np0005485008 nova_compute[192512]: 2025-10-13 16:33:09.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:09 np0005485008 nova_compute[192512]: 2025-10-13 16:33:09.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:12 np0005485008 podman[235573]: 2025-10-13 16:33:12.747340475 +0000 UTC m=+0.053188891 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible)
Oct 13 12:33:13 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:33:13.823 103642 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=e8c98390-b106-43ff-9736-5afcb5548264, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct 13 12:33:14 np0005485008 nova_compute[192512]: 2025-10-13 16:33:14.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:19 np0005485008 nova_compute[192512]: 2025-10-13 16:33:19.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:33:19 np0005485008 nova_compute[192512]: 2025-10-13 16:33:19.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:19 np0005485008 nova_compute[192512]: 2025-10-13 16:33:19.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:33:19 np0005485008 nova_compute[192512]: 2025-10-13 16:33:19.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:19 np0005485008 nova_compute[192512]: 2025-10-13 16:33:19.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:19 np0005485008 nova_compute[192512]: 2025-10-13 16:33:19.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:33:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:33:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:33:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:33:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:33:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:33:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:33:24 np0005485008 nova_compute[192512]: 2025-10-13 16:33:24.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:33:28 np0005485008 podman[235596]: 2025-10-13 16:33:28.773802544 +0000 UTC m=+0.068524326 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:33:28 np0005485008 podman[235595]: 2025-10-13 16:33:28.775802955 +0000 UTC m=+0.075731428 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251009, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 12:33:28 np0005485008 podman[235603]: 2025-10-13 16:33:28.776526518 +0000 UTC m=+0.062206000 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:33:28 np0005485008 podman[235597]: 2025-10-13 16:33:28.792199374 +0000 UTC m=+0.078450254 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251009)
Oct 13 12:33:28 np0005485008 podman[235609]: 2025-10-13 16:33:28.848947104 +0000 UTC m=+0.125287686 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true)
Oct 13 12:33:29 np0005485008 nova_compute[192512]: 2025-10-13 16:33:29.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:33:34.005 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:33:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:33:34.005 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:33:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:33:34.005 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:33:34 np0005485008 nova_compute[192512]: 2025-10-13 16:33:34.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:35 np0005485008 podman[202884]: time="2025-10-13T16:33:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:33:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:33:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:33:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:33:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3019 "" "Go-http-client/1.1"
Oct 13 12:33:39 np0005485008 nova_compute[192512]: 2025-10-13 16:33:39.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:43 np0005485008 podman[235695]: 2025-10-13 16:33:43.760134777 +0000 UTC m=+0.067562397 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Oct 13 12:33:44 np0005485008 nova_compute[192512]: 2025-10-13 16:33:44.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.613 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.614 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.614 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.614 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.636 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.636 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.636 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:48 np0005485008 nova_compute[192512]: 2025-10-13 16:33:48.636 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:33:49 np0005485008 nova_compute[192512]: 2025-10-13 16:33:49.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:33:49 np0005485008 nova_compute[192512]: 2025-10-13 16:33:49.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:49 np0005485008 nova_compute[192512]: 2025-10-13 16:33:49.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:33:49 np0005485008 nova_compute[192512]: 2025-10-13 16:33:49.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:49 np0005485008 nova_compute[192512]: 2025-10-13 16:33:49.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:49 np0005485008 nova_compute[192512]: 2025-10-13 16:33:49.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:33:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:33:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:33:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:33:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:33:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:33:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:33:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:33:52 np0005485008 nova_compute[192512]: 2025-10-13 16:33:52.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:52 np0005485008 nova_compute[192512]: 2025-10-13 16:33:52.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:53 np0005485008 nova_compute[192512]: 2025-10-13 16:33:53.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:54 np0005485008 nova_compute[192512]: 2025-10-13 16:33:54.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:57 np0005485008 nova_compute[192512]: 2025-10-13 16:33:57.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:57 np0005485008 nova_compute[192512]: 2025-10-13 16:33:57.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.232 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.233 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.233 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.233 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.465 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.466 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5808MB free_disk=73.46215438842773GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.466 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.563 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.564 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.687 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing inventories for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.704 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating ProviderTree inventory for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.705 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Updating inventory in ProviderTree for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.722 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing aggregate associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.747 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Refreshing trait associations for resource provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce, traits: HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SVM,HW_CPU_X86_SSE42,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SHA,HW_CPU_X86_SSE2,HW_CPU_X86_MMX,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_BMI2,HW_CPU_X86_ABM,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.783 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.805 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.808 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:33:58 np0005485008 nova_compute[192512]: 2025-10-13 16:33:58.810 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:33:59 np0005485008 nova_compute[192512]: 2025-10-13 16:33:59.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:33:59 np0005485008 nova_compute[192512]: 2025-10-13 16:33:59.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:33:59 np0005485008 nova_compute[192512]: 2025-10-13 16:33:59.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:33:59 np0005485008 nova_compute[192512]: 2025-10-13 16:33:59.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:59 np0005485008 nova_compute[192512]: 2025-10-13 16:33:59.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:33:59 np0005485008 nova_compute[192512]: 2025-10-13 16:33:59.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:33:59 np0005485008 podman[235746]: 2025-10-13 16:33:59.7855604 +0000 UTC m=+0.074945418 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:33:59 np0005485008 podman[235740]: 2025-10-13 16:33:59.79166205 +0000 UTC m=+0.076742193 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 13 12:33:59 np0005485008 podman[235738]: 2025-10-13 16:33:59.794434467 +0000 UTC m=+0.098383490 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:33:59 np0005485008 podman[235739]: 2025-10-13 16:33:59.796053338 +0000 UTC m=+0.089290366 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid)
Oct 13 12:33:59 np0005485008 podman[235747]: 2025-10-13 16:33:59.821474934 +0000 UTC m=+0.104638277 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 13 12:34:04 np0005485008 nova_compute[192512]: 2025-10-13 16:34:04.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:05 np0005485008 podman[202884]: time="2025-10-13T16:34:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:34:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:34:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:34:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:34:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3018 "" "Go-http-client/1.1"
Oct 13 12:34:09 np0005485008 nova_compute[192512]: 2025-10-13 16:34:09.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:11 np0005485008 nova_compute[192512]: 2025-10-13 16:34:11.807 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:14 np0005485008 nova_compute[192512]: 2025-10-13 16:34:14.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:14 np0005485008 podman[235844]: 2025-10-13 16:34:14.746399359 +0000 UTC m=+0.054622771 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal)
Oct 13 12:34:19 np0005485008 nova_compute[192512]: 2025-10-13 16:34:19.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:19 np0005485008 nova_compute[192512]: 2025-10-13 16:34:19.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:34:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:34:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:34:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:34:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:34:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:34:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:34:24 np0005485008 nova_compute[192512]: 2025-10-13 16:34:24.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:29 np0005485008 nova_compute[192512]: 2025-10-13 16:34:29.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:30 np0005485008 podman[235867]: 2025-10-13 16:34:30.783812067 +0000 UTC m=+0.062180448 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251009, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 13 12:34:30 np0005485008 podman[235865]: 2025-10-13 16:34:30.783807356 +0000 UTC m=+0.072448629 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=92672cd85fd36317d65faa0525acf849, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:34:30 np0005485008 podman[235866]: 2025-10-13 16:34:30.797621259 +0000 UTC m=+0.078273542 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible)
Oct 13 12:34:30 np0005485008 podman[235873]: 2025-10-13 16:34:30.80724213 +0000 UTC m=+0.079272303 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Oct 13 12:34:30 np0005485008 podman[235879]: 2025-10-13 16:34:30.833983467 +0000 UTC m=+0.097728080 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=92672cd85fd36317d65faa0525acf849, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251009, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 13 12:34:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:34:34.006 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:34:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:34:34.006 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:34:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:34:34.006 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:34:34 np0005485008 nova_compute[192512]: 2025-10-13 16:34:34.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:34 np0005485008 nova_compute[192512]: 2025-10-13 16:34:34.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:35 np0005485008 podman[202884]: time="2025-10-13T16:34:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:34:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:34:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:34:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:34:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3018 "" "Go-http-client/1.1"
Oct 13 12:34:39 np0005485008 nova_compute[192512]: 2025-10-13 16:34:39.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:39 np0005485008 nova_compute[192512]: 2025-10-13 16:34:39.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:44 np0005485008 nova_compute[192512]: 2025-10-13 16:34:44.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:44 np0005485008 nova_compute[192512]: 2025-10-13 16:34:44.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:44 np0005485008 nova_compute[192512]: 2025-10-13 16:34:44.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:34:44 np0005485008 nova_compute[192512]: 2025-10-13 16:34:44.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:34:44 np0005485008 nova_compute[192512]: 2025-10-13 16:34:44.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:34:44 np0005485008 nova_compute[192512]: 2025-10-13 16:34:44.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:45 np0005485008 podman[235976]: 2025-10-13 16:34:45.015767975 +0000 UTC m=+0.060563657 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350)
Oct 13 12:34:47 np0005485008 nova_compute[192512]: 2025-10-13 16:34:47.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:47 np0005485008 nova_compute[192512]: 2025-10-13 16:34:47.428 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.423 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:34:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:34:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:34:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:34:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:34:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:34:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:34:49 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.427 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.464 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:34:49 np0005485008 nova_compute[192512]: 2025-10-13 16:34:49.464 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:53 np0005485008 nova_compute[192512]: 2025-10-13 16:34:53.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:54 np0005485008 nova_compute[192512]: 2025-10-13 16:34:54.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:58 np0005485008 nova_compute[192512]: 2025-10-13 16:34:58.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.467 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.468 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.468 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.605 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.606 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5823MB free_disk=73.46174240112305GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.606 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.606 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.671 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.672 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.691 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.707 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.708 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:34:59 np0005485008 nova_compute[192512]: 2025-10-13 16:34:59.708 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:35:01 np0005485008 podman[236009]: 2025-10-13 16:35:01.75870822 +0000 UTC m=+0.056123028 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 13 12:35:01 np0005485008 podman[236012]: 2025-10-13 16:35:01.770647645 +0000 UTC m=+0.057584995 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Oct 13 12:35:01 np0005485008 podman[236011]: 2025-10-13 16:35:01.771695077 +0000 UTC m=+0.061630951 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:35:01 np0005485008 podman[236010]: 2025-10-13 16:35:01.77210338 +0000 UTC m=+0.065466821 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid)
Oct 13 12:35:01 np0005485008 podman[236018]: 2025-10-13 16:35:01.799819707 +0000 UTC m=+0.080618955 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 13 12:35:04 np0005485008 nova_compute[192512]: 2025-10-13 16:35:04.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:05 np0005485008 podman[202884]: time="2025-10-13T16:35:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:35:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:35:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:35:05 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:35:05 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3019 "" "Go-http-client/1.1"
Oct 13 12:35:09 np0005485008 nova_compute[192512]: 2025-10-13 16:35:09.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:35:14 np0005485008 nova_compute[192512]: 2025-10-13 16:35:14.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:14 np0005485008 nova_compute[192512]: 2025-10-13 16:35:14.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:35:15 np0005485008 podman[236108]: 2025-10-13 16:35:15.742759339 +0000 UTC m=+0.049690667 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Oct 13 12:35:19 np0005485008 nova_compute[192512]: 2025-10-13 16:35:19.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct 13 12:35:19 np0005485008 nova_compute[192512]: 2025-10-13 16:35:19.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:19 np0005485008 nova_compute[192512]: 2025-10-13 16:35:19.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Oct 13 12:35:19 np0005485008 nova_compute[192512]: 2025-10-13 16:35:19.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:35:19 np0005485008 nova_compute[192512]: 2025-10-13 16:35:19.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct 13 12:35:19 np0005485008 nova_compute[192512]: 2025-10-13 16:35:19.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:19 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:35:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:35:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:19 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:35:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:19 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:35:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:35:19 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:19 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:35:19 np0005485008 openstack_network_exporter[205063]: 
Oct 13 12:35:24 np0005485008 nova_compute[192512]: 2025-10-13 16:35:24.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:24 np0005485008 nova_compute[192512]: 2025-10-13 16:35:24.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:29 np0005485008 nova_compute[192512]: 2025-10-13 16:35:29.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:29 np0005485008 nova_compute[192512]: 2025-10-13 16:35:29.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:32 np0005485008 podman[236132]: 2025-10-13 16:35:32.763206134 +0000 UTC m=+0.059214385 container health_status 94b0e5626207b9f2d715ff2e737aa8cda5ad7130f6f64c4fdae99c3837b04225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251009, tcib_build_tag=92672cd85fd36317d65faa0525acf849, maintainer=OpenStack Kubernetes Operator team)
Oct 13 12:35:32 np0005485008 podman[236131]: 2025-10-13 16:35:32.76404977 +0000 UTC m=+0.064200000 container health_status 141978926d7f5a2afc5ff88b9ad128c3c81f0bfd908c20b22cde928380936727 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251009)
Oct 13 12:35:32 np0005485008 podman[236133]: 2025-10-13 16:35:32.775241081 +0000 UTC m=+0.057604775 container health_status 9f10844e31648579c11c82e464e88b7a606991e396cb2bf769f485a931e077ee (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Oct 13 12:35:32 np0005485008 podman[236130]: 2025-10-13 16:35:32.780057201 +0000 UTC m=+0.083909638 container health_status 0722f10ec9835d40e1bb9a1d56c4d2fed2b08d58faed79c5d8e3bdd377745a1e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251009, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849)
Oct 13 12:35:32 np0005485008 podman[236139]: 2025-10-13 16:35:32.80461974 +0000 UTC m=+0.089991217 container health_status a98f8be9a04b90cf37d7ac9c3143da8bbb58143f81ba62b47a5b8ae83f970035 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=92672cd85fd36317d65faa0525acf849, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251009, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 13 12:35:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:35:34.007 103642 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:35:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:35:34.008 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:35:34 np0005485008 ovn_metadata_agent[103637]: 2025-10-13 16:35:34.008 103642 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:35:34 np0005485008 nova_compute[192512]: 2025-10-13 16:35:34.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:34 np0005485008 nova_compute[192512]: 2025-10-13 16:35:34.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:35 np0005485008 podman[202884]: time="2025-10-13T16:35:35Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Oct 13 12:35:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:35:35 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 19547 "" "Go-http-client/1.1"
Oct 13 12:35:35 np0005485008 podman[202884]: @ - - [13/Oct/2025:16:35:35 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3012 "" "Go-http-client/1.1"
Oct 13 12:35:39 np0005485008 nova_compute[192512]: 2025-10-13 16:35:39.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:39 np0005485008 nova_compute[192512]: 2025-10-13 16:35:39.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:41 np0005485008 nova_compute[192512]: 2025-10-13 16:35:41.772 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:44 np0005485008 nova_compute[192512]: 2025-10-13 16:35:44.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:44 np0005485008 nova_compute[192512]: 2025-10-13 16:35:44.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:46 np0005485008 podman[236234]: 2025-10-13 16:35:46.754347053 +0000 UTC m=+0.051053090 container health_status 8c39edd304b3ffba3d14cd3e064b72ec59c644f75cf01db7c50a70e220950740 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, vcs-type=git)
Oct 13 12:35:49 np0005485008 systemd-logind[784]: New session 43 of user zuul.
Oct 13 12:35:49 np0005485008 systemd[1]: Started Session 43 of User zuul.
Oct 13 12:35:49 np0005485008 nova_compute[192512]: 2025-10-13 16:35:49.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:49 np0005485008 nova_compute[192512]: 2025-10-13 16:35:49.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:49 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Oct 13 12:35:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:35:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:49 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Oct 13 12:35:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:49 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Oct 13 12:35:49 np0005485008 openstack_network_exporter[205063]: ERROR   16:35:49 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Oct 13 12:35:49 np0005485008 nova_compute[192512]: 2025-10-13 16:35:49.429 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:49 np0005485008 nova_compute[192512]: 2025-10-13 16:35:49.430 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:49 np0005485008 nova_compute[192512]: 2025-10-13 16:35:49.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct 13 12:35:50 np0005485008 nova_compute[192512]: 2025-10-13 16:35:50.428 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:50 np0005485008 nova_compute[192512]: 2025-10-13 16:35:50.429 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct 13 12:35:50 np0005485008 nova_compute[192512]: 2025-10-13 16:35:50.430 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct 13 12:35:50 np0005485008 nova_compute[192512]: 2025-10-13 16:35:50.447 2 DEBUG nova.compute.manager [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct 13 12:35:51 np0005485008 nova_compute[192512]: 2025-10-13 16:35:51.442 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:53 np0005485008 nova_compute[192512]: 2025-10-13 16:35:53.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:53 np0005485008 ovs-vsctl[236433]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 13 12:35:54 np0005485008 nova_compute[192512]: 2025-10-13 16:35:54.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:54 np0005485008 nova_compute[192512]: 2025-10-13 16:35:54.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:54 np0005485008 virtqemud[192082]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 13 12:35:54 np0005485008 virtqemud[192082]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 13 12:35:54 np0005485008 virtqemud[192082]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 13 12:35:55 np0005485008 nova_compute[192512]: 2025-10-13 16:35:55.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:56 np0005485008 nova_compute[192512]: 2025-10-13 16:35:56.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:58 np0005485008 systemd[1]: Starting Hostname Service...
Oct 13 12:35:58 np0005485008 systemd[1]: Started Hostname Service.
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.427 2 DEBUG oslo_service.periodic_task [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.484 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.485 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.485 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.485 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.642 2 WARNING nova.virt.libvirt.driver [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.643 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5455MB free_disk=73.32551574707031GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.644 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.644 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.728 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.729 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.765 2 DEBUG nova.compute.provider_tree [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed in ProviderTree for provider: b038b2e7-0dfd-4adb-a174-3db2b96fc8ce update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.781 2 DEBUG nova.scheduler.client.report [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Inventory has not changed for provider b038b2e7-0dfd-4adb-a174-3db2b96fc8ce based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.782 2 DEBUG nova.compute.resource_tracker [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct 13 12:35:59 np0005485008 nova_compute[192512]: 2025-10-13 16:35:59.783 2 DEBUG oslo_concurrency.lockutils [None req-ea6a886e-168c-4386-9f1c-9013c2ce0c23 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
